Linux

Linux/Unix Command Line Help

I need to take a text file and issue a command (for use in a script) which will automatically remove all the lines before (or after) a certain point in the file. An example:

If the file contained the lines FooFooFoo, Fido and BarBarBar, I could say: don't output anything until you see 'Fido', then output the rest of the file (which could be redirected to an output file). The result would be BarBarBar (or possibly Fido followed by BarBarBar).

Or I could say: output everything until you see 'Fido', then stop (yielding FooFooFoo, or possibly FooFooFoo followed by Fido).

I'm sure this must be possible, but I can't see how to do it. Any ideas gratefully received.
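(For anyone landing here with the same question: both cases can be handled with standard sed or awk one-liners. A sketch, with sample.txt standing in for the file, one word per line:)

```shell
# Sample file: three lines, with 'Fido' as the marker
printf 'FooFooFoo\nFido\nBarBarBar\n' > sample.txt

# Everything from the first 'Fido' line to the end, inclusive:
sed -n '/Fido/,$p' sample.txt          # Fido, BarBarBar

# Everything after 'Fido', excluding the marker itself:
sed '1,/Fido/d' sample.txt             # BarBarBar

# Everything up to and including the first 'Fido', then stop reading:
sed '/Fido/q' sample.txt               # FooFooFoo, Fido

# Everything before 'Fido', excluding it:
awk '/Fido/{exit} {print}' sample.txt  # FooFooFoo
```

The output of any of these can be redirected to a file in the usual way.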

Bash Problem - sorted

With thanks to those who tried to help, the final script for my Bash problem is below. Problem:

Grab the fund price for an index tracker from the L&G website, email the price to me, and store it in a CSV file for easy import into Quicken. I could not find a ticker which allows automatic download into Quicken, so the ticker is made up (LGTRKFTSE); it simply keeps the prices associated with the correct fund.

The script may not be the most elegant of solutions, but it works. There is a slight modification from the version I arrived at in the previous post: I needed to shift the decimal point, as the price is quoted in pence but I need it in pounds.

Note that ^M is entered as ctrl-V ctrl-M
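(An aside, not part of the script below: if typing the literal control character is awkward, carriage returns can be deleted with tr instead, no ^M required.)

```shell
# Sample line with a DOS (CRLF) line ending
printf 'LGTRKFTSE,169.6\r\n' > crlf.txt

# tr -d '\r' removes every carriage return; no literal ^M needed
tr -d '\r' < crlf.txt > clean.txt

cat clean.txt    # LGTRKFTSE,169.6
```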

# Get the fund prices for L&G tracker and email me daily

# Get the file
cd ~
wget -q http://www.legalandgeneral.com/investment/fundprice1_index.jsp --output-document=fundprices.txt

# Find the data I want, stripping the HTML tags
grep --after-context=10 "UK Index Trust (Acc) (R)" fundprices.txt | sed -e :a -e 's/<[^>]*>//g;/</N;//ba' > ~/fundprices.txt

# Trim excess tabs
tr -d '\t' < fundprices.txt > fundpricesout.txt

# I only want the closing price
grep -m 1 '[0-9]' ~/fundpricesout.txt > fundprices.txt

# Shift the decimal point (pence to pounds)
cp fundprices.txt fundpricesout.txt
sed 's/[0-9][0-9]\./\.&/g' fundpricesout.txt | sed 's/\.//2' > fundprices.txt

# Output the CSV record for easier import to Quicken
echo -n "LGTRKFTSE," >> fundprice.csv
cat fundprices.txt >> fundprice.csv
echo -n $(date +%d/%m/%Y) >> fundprice.csv
echo "BRK" >> fundprice.csv

# Strip off the ctrl-M characters
sed 's/^M//g' fundprice.csv > fundpricesout.txt
cp fundpricesout.txt fundprice.csv

# This bit reformats the file
cat fundprice.csv | tr '\n' ',' > fundpricesout.txt
sed 's/BRK,/\n/g' fundpricesout.txt > fundprice.csv
cp fundprice.csv fundpricesout.txt
sed 's/,LG/\nLG/g' fundprice.csv > fundpricesout.txt
# The above is only needed as, for some silly reason, I couldn't get
# rid of the newline in the file containing the price.
# It's not pretty, but it works.

# This bit cleans out any lines without a price.
# That sometimes happens if there is a network problem, and I am happy to miss a datapoint
# as long as the file is in the right format.
grep '[0-9]\.[0-9]' fundpricesout.txt > fundprice.csv

# Mail me the price for the day
mail -s "Prices for `date +%Y-%m-%d`" myemail@foo.bar < ~/fundprices.txt

# Tidy up a bit
rm fundpricesout.txt
rm fundprices.txt

This is set to run as a cron job on weekdays. Monthly, the CSV file is emailed to me.

Bash Problem, strange new line!

I've been working on a Bash problem. I have this little script which runs on weekdays (set by crontab). It grabs the price of a unit trust from the Legal & General website, cleans up the output and emails me the price. I want to create a CSV file of all the prices (the ticker is one of my own choosing - I've not been able to find the correct ticker which would automatically download the prices into Quicken).

(Since I first posted this, I've tweaked the script a little trying to solve the bug - to no avail. I've replaced the code with the version as of 4th Jan 2008 - the bug is still present. Note, ^M is actually ctrl-V ctrl-M, not caret-M.)

# Get the fund prices for L&G tracker and email me daily

# Get the file
cd ~
wget -q http://www.legalandgeneral.com/investment/fundprice1_index.jsp --output-document=fundprices.txt

# Find the data I want, stripping the HTML tags
grep --after-context=10 "UK Index Trust (Acc) (R)" fundprices.txt | sed -e :a -e 's/<[^>]*>//g;/</N;//ba' > ~/fundprices.txt

# Trim excess tabs
tr -d '\t' < fundprices.txt > fundpricesout.txt

# I only want the closing price
grep -m 1 '[0-9]' ~/fundpricesout.txt > fundprices.txt

# Output the CSV record for easier import to Quicken
echo -n "LGTRKFTSE," >> fundprice.csv
cat fundprices.txt >> fundprice.csv
# BUG - THERE IS A NEW LINE CREEPING IN SOMEWHERE - IT NEEDS REMOVING
echo -n "," >> fundprice.csv
echo $(date +%d/%m/%Y) >> fundprice.csv

# Strip off the ctrl-M characters
sed 's/^M//g' fundprice.csv > fundpricesout.txt
cp fundpricesout.txt fundprice.csv

# Mail me the price for the day
mail -s "Prices for `date +%Y-%m-%d`" myemail@address.foo.bar < ~/fundprices.txt

# Tidy up a bit
rm fundpricesout.txt
rm fundprices.txt

The script emails me correctly, but unfortunately, I can't get the CSV to work. The output file looks like this:

LGTRKFTSE,169.6
,02/01/2008

Where it should look like this:

LGTRKFTSE,169.6,02/01/2008

How can I get rid of that annoying new line character? (I want to keep newlines between the entries for different days)

I'm sure it's something subtle but easy - but it's escaping me.... Anyone know the trick?
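(One likely culprit, for anyone with the same symptom: cat copies across the trailing newline at the end of fundprices.txt, which lands in the middle of the CSV record. Command substitution strips trailing newlines, so one fix is a sketch like this, using the filenames from the script above:)

```shell
# Stand-in for the downloaded, cleaned-up price file
printf '169.6\n' > fundprices.txt

# $(...) strips trailing newlines, so nothing stray reaches the CSV
echo -n "LGTRKFTSE,$(cat fundprices.txt)," >> fundprice.csv
echo $(date +%d/%m/%Y) >> fundprice.csv

cat fundprice.csv    # LGTRKFTSE,169.6,DD/MM/YYYY on a single line
```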

Countdown

The Countdown on the homepage of this site (update: no longer used) is produced by a script run from the crontab. Thanks to a few folks on a certain IRC channel for getting me past some mental blocks.

In cron, there is an entry which reads

*/15 * * * * ~/path/to/countdown ~/path/to/datafile > ~/path/to/outputfile

Countdown is shown below; it is a text file chmodded to 755. The datafile is a simple text file in the format:

2006-11-23 :: Event Details
2006-12-14 14:23 :: Other Event
2005-09-23 :: \<a href\=\"http://www.murky.org/\"\>Go and look at Murk's Amazon Wishlist\</a\>

The times are optional. Note that anything which Bash may misinterpret should be escaped, i.e. prefixed with \ (as in the link in the example above).

The resulting file can be put into a webpage using a server side include, or some other means.

The data file will automatically be sorted into date order when the script is run. Please note that the script isn't really set up for repeating events. If anyone modifies the script to do this, I would be pleased to learn of the mod.
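(The automatic sorting works because the ISO-style YYYY-MM-DD date at the start of each line sorts chronologically as plain text, so the script's plain in-place sort is all that's needed. A quick illustration with hypothetical event lines:)

```shell
# Three events, deliberately out of order
printf '2006-12-14 14:23 :: Other Event\n2005-09-23 :: Wishlist\n2006-11-23 :: Event Details\n' > events.txt

# The same in-place sort the script uses
sort events.txt -o events.txt

cat events.txt    # 2005-09-23 entry first, 2006-12-14 entry last
```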

At some point, I want to limit it to the next N events only, this should be a simple modification, but I don't have the will right now! The script should not display events more than 24 hours old, nor should it display events more than a year hence (actually, it is a little less than a year)

I would also like to be able to specify when the item should appear, i.e. start to show the item 30 days before etc. The format for this would be:

date :: daysbefore :: Event

As an example:

2006-11-23 :: 30 :: Event Details
2006-12-14 14:23 :: 140 :: Other Event
2005-09-23 :: 210 :: \<a href\=\"http://www.murky.org/\"\>Go and look at Murk's Amazon Wishlist\</a\>

This is probably the most desired mod from my point of view.
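(Parsing the proposed three-field format needs nothing beyond POSIX parameter expansion, so it would slot into the existing script without extra tools. A sketch - the variable names whenisit and event are borrowed from the script, the daysbefore handling itself is hypothetical:)

```shell
line='2006-11-23 :: 30 :: Event Details'

whenisit=${line%% ::*}     # text before the first ' ::'
rest=${line#* :: }         # text after the first ' :: '
daysbefore=${rest%% ::*}   # the new middle field
event=${rest#* :: }        # the final field

echo "date=$whenisit days=$daysbefore event=$event"
```

The event would then be displayed only when optime is less than daysbefore*86400 seconds away.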

The repeating event mod would change the format to include R for repeat

2006-11-23 :: 30 R :: Event Details
2006-12-14 14:23 :: 140 :: Other Event
2005-09-23 :: 210 R :: \<a href\=\"http://www.murky.org/\"\>Go and look at Murk's Amazon Wishlist\</a\>

Now, when an event is past, the line would be deleted if there were no R present, and the year would be incremented if there were. I imagine this would involve writing a new file line by line and then copying the new file over the old at the end. It would be up to the user to ensure there was a backup (it would not matter if it were outdated, as it would be brought up to date when pressed into use!)

#!/bin/sh

if [ -z "$1" ] ; then
  echo "I must have a filename to work with"
  exit 0
fi

# The five hours corrects the discrepancy between my server and me
# MODIFY AS REQUIRED - my server is WEST of me
uktime=$(date --date='5 hours' +%s)
uktime2=$(date --date='5 hours' '+%B %d %H:%M')

sort $1 -o $1
echo "<ul class=\"module-list\">"

# open file for reading
exec 6<$1

# read until end of file
while read -u 6 dta
do
  event=$(echo $dta | sed "s@.*:: @@")
  whenisit=$(echo $dta | sed "s@::.*@@")
  optime=$(date --date="$whenisit" +%s)
  optime=$(($optime-$uktime))
  if expr \( $optime \< 0 \) > /dev/null
  then
    # event is in the past
    optime=-$optime
    hours=$((optime/3600))
    if [ $hours -lt 24 ]
    then
      echo "<li>$event"
      echo " <span class=\"dateline\">(passed within last 24hrs)</span></li>"
    fi
  else
    year=$((optime/30000000))
    # this will suppress any events more than a little less than a year off
    if expr \( $year \< 1 \) > /dev/null
    then
      echo -n "<li>$event <span class=\"dateline\">("
      secs=$((optime%60))
      mins=$((optime/60))
      optime=$mins
      if expr \( $mins \< 60 \) > /dev/null
      then
        echo -n "$mins minutes and $secs seconds."
      else
        mins=$((optime%60))
        hours=$((optime/60))
        optime=$hours
        if expr \( $hours \< 24 \) > /dev/null
        then
          echo -n "$hours hour"
          if [ $hours != 1 ] ; then echo -n "s" ; fi # pluralisation
          echo -n ", $mins mins"
        else
          hours=$((optime%24))
          days=$((optime/24))
          optime=$days
          echo -n "$days day"
          if [ $days != 1 ] ; then echo -n "s" ; fi # pluralisation
          echo -n ", $hours hour"
          if [ $hours != 1 ] ; then echo -n "s" ; fi # pluralisation
        fi # hours
      fi # minutes
      echo ")</span></li>"
    fi # check that it isn't too far off
  fi
done

echo "<li class=\"lastupdate\">lastupdated: $uktime2 UK</li>"
echo "</ul>"

# close the data file
exec 6<&-

exit 0

SQL Backups

With thanks to Andy Budd's Page, I have finally worked out how to do decent backup/restores.

I created a text file called sqlbackup, in the file is this:

#!/bin/sh

# echo start message
echo "Backup Script Processing"

# navigate to backup dir
if cd ~/backup.sql/latest
then
  echo "...successfully navigated to backup dir"
else
  echo "could not locate backup directory"
  exit 0
fi

# echo message
echo "exporting SQL dump"

# dump the db into a .sql file
if mysqldump --user=SQLUSERNAME --password=SQLPASSWORD.... .... SQLBLOGNAME --opt | gzip -c > backup-mt.sql.gz
then
  # echo success message
  echo "SQL dump successful"
  ls -la
else
  # echo error message
  echo "mysqldump error"
  exit 0
fi

My crontab has:

# Backup MySQL
5 0 * * * /home/murk1e58/sqlbackup >> ~/error.log 2>&1
56 23 * * * cp ~/backup.sql/latest/backup-mt.sql.gz ~/backup.sql/daily/$(date +\%A).sql.gz
58 23 28 * * cp ~/backup.sql/latest/backup-mt.sql.gz ~/backup.sql/monthly/$(date +\%B).sql.gz

(The lines marked .... should be on one line; I have split them here to keep the line narrower than most screens.) To restore the backup, one types:

gunzip backup.sql.gz
mysql --user=username --password=password sqldatabasename < backup.sql

All good stuff!

Of course, what I really need is a completely separate server with similar features to this one (cpanel, command line etc.); then I could send the backup to it automatically and have it mirror this server!

In the short term, I would like to work out how to email the resulting file... any ideas?
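(One old-school approach, offered as an assumption rather than a tested answer: it needs uuencode from the sharutils package, and mutt if you want a proper MIME attachment - neither is guaranteed to be installed.)

```shell
# Stand-in file, so the commands below can be tried without a real dump
printf 'dummy dump\n' | gzip -c > backup-mt.sql.gz

# uuencode the gzipped dump and pipe it into mail:
uuencode backup-mt.sql.gz backup-mt.sql.gz | mail -s "Nightly SQL backup" myemail@foo.bar

# or, if mutt is available, send it as a real attachment:
mutt -s "Nightly SQL backup" -a backup-mt.sql.gz -- myemail@foo.bar < /dev/null
```

Either line could be appended to the sqlbackup script after a successful dump.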