coding

Themes

The observant may have noticed that the look of this site has suddenly got rather gaudy. There's a good reason for this - I've been creating a WordPress theme which I want to be as flexible as possible: a 'bare' theme which can then be styled easily with CSS. I know these themes already exist, but I wanted my own, m'kay?

The reason for the gaudy colours? Testing. I wanted the default colours to be fairly obvious, with each main element being distinct, so that I could be sure I had no typos in the CSS selectors.

I'll leave the theme running for a little while - I'd be interested to hear about any usability issues (other than the horrible colour scheme).

The theme should be fully widgetised, and it should have comment threading, avatars and so on.

The theme should be fluid and resize gracefully: as screen width reduces, the images shrink whilst keeping their aspect ratio.

Things I know I want to fix - some of these are purely 'behind the scenes':

  1. I want, at most, one sticky post on the front page. At the moment, it'll put any number on.
  2. I want to have a tags page which shows the tag cloud.
  3. Behind the scenes, the entry formatting uses the same template, called when needed by index.php and archive.php. I need to get the single.php template to use this as well in order to minimise maintenance (requires some 'if' statements).
  4. Do I *need* single.php and archive.php once I've done that?
  5. Author pages, ideally automatically pulling in gravatars.
  6. Decent 404 page
  7. The comment form gets screwed up on a narrow screen and doesn't resize gracefully. I don't know why.
  8. I want the theme to be accessible, i.e. good for screen readers and the like. I have no way to test this, however, so if you know anyone with a screen reader, please do point them in this direction and ask them to comment (or pass on their comments should it *really* be unusable).
    1. Is the order of elements okay?
    2. The aural stylesheet hasn't been done; for me, that'd be coding without testing - I would want a sheet though!
    3. The tags/related tags stuff - does there need to be a way, ideally optional, to skip that for screen readers? I wonder how (without introducing new screen cruft).
  9. I'll then package up this rough theme for release, create a duplicate and change the look and feel to customise it. If I find that I have to customise anything other than a stylesheet, I'll need to amend the 'raw' theme.

Things I'm unsure of:

  1. I've appended categories to tags (with a different class for styling)
  2. Do I want to keep the related posts thing? It relies on a plugin, so isn't essential for the theme. (The plugin may be disabled, as a recent update caused a problem behind the scenes... I hope I remember to remove this sentence when fixed, but I've couched it as a conditional just in case.)

Linux/Unix Command Line Help

I need to take a text file and issue a command (for use in a script) which will automatically remove all the lines before (or after) a certain point in the file. For example:

If the file contained FooFooFoo Fido BarBarBar, I could say: don't output anything until you see 'Fido', then just output the rest of the file (which could be redirected to an output file). The result would be BarBarBar (or possibly Fido BarBarBar).

Or I could say: output everything until you see 'Fido', then stop (yielding FooFooFoo, or possibly FooFooFoo Fido).

I'm sure this must be possible, but I can't see how to do it. Any ideas gratefully received.
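(A sketch of one possibility, using sed's address ranges - 'Fido' and the sample input are placeholders, and I've not battle-tested this beyond GNU sed:)

```shell
# Build a three-line sample file
printf 'FooFooFoo\nFido\nBarBarBar\n' > input.txt

# Print nothing until 'Fido' appears, then everything from that line on:
sed -n '/Fido/,$p' input.txt     # Fido, BarBarBar

# Print everything up to and including the first 'Fido' line, then stop:
sed '/Fido/q' input.txt          # FooFooFoo, Fido

# Exclusive variants, which drop the 'Fido' line itself:
sed '1,/Fido/d' input.txt        # BarBarBar
sed -n '/Fido/q;p' input.txt     # FooFooFoo
```

Either form can be redirected to an output file in the usual way.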

Bash Problem - sorted

With thanks to those who tried to help, the final script for my Bash problem is below. Problem:

To grab the fund price from the L&G website for an index tracker, to email the price to me, and to store that price in a CSV file to allow easy import to Quicken. There is no ticker available for auto download to Quicken that I could find, so the ticker is made up (LGTRKFTSE). This means the prices are associated with the correct share.

The script may not be the most elegant of solutions, but it works. There is a slight modification from the version I arrived at in the previous post: I needed to shift the decimal point, as the price is quoted in pence but I need it in pounds.

Note that ^M is entered as ctrl-V ctrl-M

# Get the fund prices for L&G tracker and email me daily

# Get the file
cd ~
wget -q http://www.legalandgeneral.com/investment/fundprice1_index.jsp --output-document=fundprices.txt

# Find the data I want
grep --after-context=10 "UK Index Trust (Acc) (R)" fundprices.txt | sed -e :a -e 's/<[^>]*>//g;/</N;//ba' > ~/fundprices.txt

# Trim excess tabs
tr -d '\t' < fundprices.txt > fundpricesout.txt

# I only want the closing price
grep -m 1 '[0-9]' ~/fundpricesout.txt > fundprices.txt

# Shift the decimal point
cp fundprices.txt fundpricesout.txt
sed 's/[0-9][0-9]\./\.&/g' fundpricesout.txt | sed 's/\.//2' > fundprices.txt

# Output the CSV record for easier import to Quicken
echo -n "LGTRKFTSE," >> fundprice.csv
cat fundprices.txt >> fundprice.csv
echo -n $(date +%d/%m/%Y) >> fundprice.csv
echo "BRK" >> fundprice.csv

# Strip off the ctrl M characters
sed 's/^M//g' fundprice.csv > fundpricesout.txt
cp fundpricesout.txt fundprice.csv

# This bit reformats the file
cat fundprice.csv | tr '\n' ',' > fundpricesout.txt
sed 's/BRK,/\n/g' fundpricesout.txt > fundprice.csv
cp fundprice.csv fundpricesout.txt
sed 's/,LG/\nLG/g' fundprice.csv > fundpricesout.txt
# The above is only needed as for some silly reason I couldn't get
# rid of the newline in the file containing the price
# It's not pretty, but it works

grep '[0-9]\.[0-9]' fundpricesout.txt > fundprice.csv
# This bit cleans out any lines without a price.
# It sometimes happens if there is a network problem, and I am happy to miss a datapoint
# as long as the file is in the right format.

# Mail me the price for the day
mail -s "Prices for `date +%Y-%m-%d`" myemail@foo.bar < ~/fundprices.txt

# Tidy up a bit
rm fundpricesout.txt
rm fundprices.txt

This is set to run as a cron job on weekdays. Monthly, I am emailed the csv file.
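(As an aside, the decimal shift could probably be done more directly by letting awk divide the pence figure by 100 - a sketch, with 169.6 pence as an example value:)

```shell
# Convert a price quoted in pence to pounds by dividing by 100
echo "169.6" | awk '{ printf "%.3f\n", $1 / 100 }'   # 1.696
```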

Bash Problem, strange new line!

I've been working on a Bash problem. I have this little script which, on a weekday (set by crontab), will run. It grabs the price of a unit trust from the Legal & General website, cleans up the output and emails me the price. I want to create a CSV file of all the prices (the ticker is one of my own choosing - I've not been able to find the correct ticker which would automatically download the prices to Quicken).

(Since I first posted this, I've tweaked the script a little, trying to solve the bug - to no avail. I've replaced the code with the version as of 4th Jan 2008 - the bug is still present. Note, ^M is actually ctrl-V ctrl-M, not caret-M.)

# Get the fund prices for L&G tracker and email me daily

# Get the file
cd ~
wget -q http://www.legalandgeneral.com/investment/fundprice1_index.jsp --output-document=fundprices.txt

# Find the data I want
grep --after-context=10 "UK Index Trust (Acc) (R)" fundprices.txt | sed -e :a -e 's/<[^>]*>//g;/</N;//ba' > ~/fundprices.txt

# Trim excess tabs
tr -d '\t' < fundprices.txt > fundpricesout.txt

# I only want the closing price
grep -m 1 '[0-9]' ~/fundpricesout.txt > fundprices.txt

# Output the CSV record for easier import to Quicken
echo -n "LGTRKFTSE," >> fundprice.csv
cat fundprices.txt >> fundprice.csv
# BUG - THERE IS A NEW LINE CREEPING IN SOMEWHERE - IT NEEDS REMOVING
echo -n "," >> fundprice.csv
echo $(date +%d/%m/%Y) >> fundprice.csv

# Strip off the ctrl M characters
sed 's/^M//g' fundprice.csv > fundpricesout.txt
cp fundpricesout.txt fundprice.csv

# Mail me the price for the day
mail -s "Prices for `date +%Y-%m-%d`" myemail@address.foo.bar < ~/fundprices.txt

# Tidy up a bit
rm fundpricesout.txt
rm fundprices.txt

The script emails me correctly, but unfortunately, I can't get the CSV to work. The output file looks like this:

LGTRKFTSE,169.6 ,02/01/2008

Where it should look like this:

LGTRKFTSE,169.6,02/01/2008

How can I get rid of that annoying new line character? (I want to keep newlines between the entries for different days)

I'm sure it's something subtle but easy - but it's escaping me.... Anyone know the trick?
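(For what it's worth, the trick I'd try: command substitution - `$(...)` - strips trailing newlines, so the price can be glued into the record without echo ever printing one. A sketch with a fake price file; the real script would append with >> rather than overwrite:)

```shell
# The price file ends with a newline - this is where the stray break comes from
printf '169.6\n' > fundprices.txt

# $(cat ...) strips the trailing newline, and echo -n adds none of its own,
# so all three fields land on a single CSV line
echo -n "LGTRKFTSE,$(cat fundprices.txt)," > fundprice.csv
echo $(date +%d/%m/%Y) >> fundprice.csv

cat fundprice.csv   # e.g. LGTRKFTSE,169.6,02/01/2008
```

An equivalent fix is piping the price through tr -d '\n' before appending it.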

It was too expensive...

Following the fiasco of losing the personal details of 25 million people, it has emerged that the audit office did not request all of the information that was sent:

'the NAO wanted only limited child benefit records but was told in an e-mail from a senior business manager in March that to remove more sensitive information was too costly and complex.'

Please correct me if I'm wrong, but this should be trivial for any well-set-up system. In the commercial FileMaker system, one can choose which records to export. If, as is more likely, the database was SQL, one could make a copy of the database and then drop unwanted tables or fields. For anyone managing an IT system, this should have been trivial.

Someone who is responsible for decisions on such a large and costly database really should be able to manipulate that data easily.

For the record, the SQL syntax (after about 15 secs of research) is:

ALTER TABLE <table_name> DROP <field_name>

or even

DROP TABLE table_name

Methinks the 'it was too expensive' excuse is just so much baloney.

Some Links to finish off: NO2ID - Stop ID cards and the database state

Update: qwghlm has a post on this too.

Countdown

The Countdown on the homepage of this site (update: no longer used) is produced by using a script, run from the crontab. Thanks to a few folks on a certain irc channel for getting me past some mental blocks.

In cron, there is an entry which reads

*/15 * * * * ~/path/to/countdown ~/path/to/datafile > ~/path/to/outputfile

Countdown is shown below; it is a text file chmodded to 755. The datafile is a simple text file with the format:

2006-11-23 :: Event Details
2006-12-14 14:23 :: Other Event
2005-09-23 :: \<a href\=\"http://www.murky.org/\"\>Go and look at Murk's Amazon Wishlist\</a\>

The times are optional. Note that anything which Bash may misinterpret should be escaped, i.e. prefixed with \ (a backslash).

The resulting file can be put into a webpage using a server side include, or some other means.

The data file will automatically be sorted into date order when the script is run. Please note that the script isn't really set up for repeating events. If anyone modifies the script to do this, I would be pleased to learn of the mod.

At some point, I want to limit it to the next N events only; this should be a simple modification, but I don't have the will right now! The script should not display events more than 24 hours old, nor should it display events more than a year hence (actually, it is a little less than a year).
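(The next-N-events limit could be sketched with a simple counter in the event loop - a standalone illustration, not the script itself, with maxevents=3 as an arbitrary example:)

```shell
#!/bin/sh
# Emit at most $maxevents list items, then stop reading further events
maxevents=3
count=0
for event in "Event A" "Event B" "Event C" "Event D" "Event E"
do
  if [ $count -ge $maxevents ]
  then
    break
  fi
  echo "<li>$event</li>"
  count=$((count+1))
done
```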

I would also like to be able to specify when the item should appear, i.e. start to show the item 30 days before etc. The format for this would be:

date :: daysbefore :: Event

As an example:

2006-11-23 :: 30 :: Event Details
2006-12-14 14:23 :: 140 :: Other Event
2005-09-23 :: 210 :: \<a href\=\"http://www.murky.org/\"\>Go and look at Murk's Amazon Wishlist\</a\>

This is probably the most desired mod from my point of view.

The repeating event mod would change the format to include R for repeat

2006-11-23 :: 30 R :: Event Details
2006-12-14 14:23 :: 140 :: Other Event
2005-09-23 :: 210 R :: \<a href\=\"http://www.murky.org/\"\>Go and look at Murk's Amazon Wishlist\</a\>

Now, when an event is past, the line would be deleted if there were no R present, and the year would be modified if there were. I imagine this would involve writing a new file line by line and then copying the new file over the old at the end. It would be up to the user to ensure there was a backup (it would not matter if it were outdated, as when pressed into use it would be modified!)
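(The year-bump for a repeating line can be sketched in isolation with shell parameter expansion - peel the year off the front of the date, add one, and write the line back; the field layout is the hypothetical one above:)

```shell
#!/bin/sh
line='2006-11-23 :: 30 R :: Event Details'
year=${line%%-*}              # everything before the first '-', i.e. the year
rest=${line#*-}               # everything after the first '-'
newline="$((year+1))-$rest"   # reassemble with the year incremented
echo "$newline"               # 2007-11-23 :: 30 R :: Event Details
```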

#!/bin/sh

if [ -z "$1" ] ; then
  echo "I must have a filename to work with"
  exit 0
fi

# The Five Hours corrects the discrepancy between my server and me
# MODIFY AS REQUIRED, My server is WEST of me
uktime=$(date --date='5 hours' +%s)
uktime2=$(date --date='5 hours' '+%B %d %H:%M')

sort $1 -o $1
echo "<ul class=\"module-list\">"
# open file for reading
exec 6<$1
# read until end of file
while read -u 6 dta
do
  event=$(echo $dta | sed "s@.*:: @@")
  whenisit=$(echo $dta | sed "s@::.*@@")
  optime=$(date --date="$whenisit" +%s)
  optime=$(($optime-$uktime))
  if expr \( $optime \< 0 \) > /dev/null
  then
    optime=-$optime
    hours=$((optime/3600))
    if [ $hours \< 24 ]
    then
      echo "<li>$event"
      echo " <span class=\"dateline\">(passed within last 24hrs)</span></li>"
    fi
  else
    year=$((optime/30000000))
    # this will suppress any events more than a little less than a year off
    if expr \( $year \< 1 \) > /dev/null
    then
      echo -n "<li>$event <span class=\"dateline\">("
      secs=$((optime%60))
      mins=$((optime/60))
      optime=$mins
      if expr \( $mins \< 60 \) > /dev/null
      then
        echo -n "$mins minutes and $secs seconds."
      else
        mins=$((optime%60))
        hours=$((optime/60))
        optime=$hours
        if expr \( $hours \< 24 \) > /dev/null
        then
          echo -n "$hours hour"
          if [ $hours != 1 ]
          then
            echo -n "s"
          fi #pluralisation
          echo -n ", $mins mins"
        else
          hours=$((optime%24))
          days=$((optime/24))
          optime=$days
          echo -n "$days day"
          if [ $days != 1 ]
          then
            echo -n "s"
          fi #pluralisation
          echo -n ", $hours hour"
          if [ $hours != 1 ]
          then
            echo -n "s"
          fi #pluralisation
        fi #hours
      fi #minutes
      echo ")</span></li>"
    fi #check that it isn't too far
  fi
done

echo "<li class=\"lastupdate\">lastupdated: $uktime2 UK</li>"
echo "</ul>"

# close file test.data
exec 6<&-

exit 0

SQL Backups

With thanks to Andy Budd's Page, I have finally worked out how to do decent backup/restores.

I created a text file called sqlbackup, in the file is this:

#!/bin/sh

# echo start message
echo "Backup Script Processing"

# navigate to backup dir
if cd ~/backup.sql/latest
then
  echo "...successfully navigated to backup dir"
else
  echo "could not locate backup directory"
  exit 0
fi

# echo message
echo "exporting SQL dump"

if
  # dump the db into a .sql file
  mysqldump --user=SQLUSERNAME --password=SQLPASSWORD....
  .... SQLBLOGNAME --opt | gzip -c > backup-mt.sql.gz;
then
  # echo success message
  echo "SQL dump successful"
  ls -la
else
  # echo error message
  echo "mysqldump error"
  exit 0
fi

My crontab has:

# Backup MySQL
5 0 * * * /home/murk1e58/sqlbackup >> ~/error.log 2>&1
56 23 * * * cp ~/backup.sql/latest/backup-mt.sql.gz ~/backup.sql/daily/$(date +\%A).sql.gz
58 23 28 * * cp ~/backup.sql/latest/backup-mt.sql.gz ~/backup.sql/monthly/$(date +\%B).sql.gz

(The lines marked .... should be on one line; I have split them here to make sure the line is narrower than most screens.) To restore the backup, one types:

gunzip backup.sql.gz
mysql --user=username --password=password sqldatabasename < backup.sql

All good stuff!

Of course, what I really need is a completely separate server, with similar features to this one (cpanel, command line etc); then I could send the backup to it automatically and have it mirror this server!

In the short term, I would like to work out how to email the resulting file... any ideas?
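(One sketch: send the dump as the message body, since gunzip -c decompresses to stdout without touching the .gz file on disk - the address is a placeholder:)

```shell
# Decompress to stdout and pipe the SQL into mail as the message body;
# the .gz on disk is left intact
gunzip -c ~/backup.sql/latest/backup-mt.sql.gz | mail -s "SQL backup `date +%Y-%m-%d`" myemail@foo.bar
```

For a proper attachment, the traditional trick is uuencode (from the sharutils package, assuming it's installed) piped into mail, or mutt's -a flag.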