Creating iCalendar rules by hand and with a Perl script
+
Dave Morriss
+
Editor’s Note 2020-01-02
+
The notes for this episode have been reformatted, particularly the long-form notes. This was done to make them more readable. Also, the original Git repository has been changed from Gitorious to GitLab.
+
In 2019 an iCalendar file was placed on the HPR server at http://hackerpublicradio.org/HPR_Community_News_schedule.ics which you can use in your own calendar application. The file contains the recording times of 12 months of Community News shows and is updated monthly.
+
The Problem
+
Back in 2012 Ken Fallon tried to use Google Calendar to set up an event for the recording of the monthly Community News shows on HPR. He wanted to set these on the Saturday before the first Monday of the month. Surprisingly he didn’t find a way to do this and ended up deleting the attempt.
+
I looked at the calendaring application I use, Thunderbird with the Lightning calendar plugin, to see if I could manage it there. I couldn't find a way either.
+
This episode documents my journey to find a way to make the calendar entries we need.
+
Research
+
I was aware that calendars like Google Calendar and many more use iCalendar to represent and communicate events, and so I thought I would try and find out more. I have often wondered how iCalendar calendaring works, so I grabbed a copy of RFC 5545 and absorbed enough to get a vague idea of how it defines recurrent entries. If you’d like to read this yourself have a look at http://www.ietf.org/rfc/rfc5545.txt
+
There are two primary methods of defining recurrent events within iCalendar: RRULE and RDATE. The RRULE property is the more powerful of the two and the more complex. The description of RRULE is long and involved, but in the context of this problem I could see how to define the first Monday of every month:
RRULE:FREQ=MONTHLY;BYDAY=1MO
+
FREQ=MONTHLY simply means the rule repeats every month.
+
BYDAY=1MO then means that every month the first Monday is selected.
+
+
Most calendar applications are well able to deal with this sort of specification, and it seems to be the way in which most recurrent events are defined.
+
Experiment 1
+
However, this is not what we want. We need the Saturday before the first Monday, but the iCalendar syntax doesn’t have any obvious way of subtracting 2 days to get the Saturday before, especially when it could be in the previous month.
+
The definition of the BYDAY rule part specifies a comma separated list of days of the week (MO, TU, WE, TH, FR, SA, SU). As we have seen these weekday specifications can also be preceded by a digit as in 1MO.
+
There is also a rule part BYSETPOS which modifies the BYDAY rule part. It is followed by a comma-separated list of values, each of which selects the nth occurrence within the set of events matched by the rule.
+
This led me to believe that I could make a rule as follows:
RRULE:FREQ=MONTHLY;BYDAY=SA,SU,1MO;BYSETPOS=1
+
FREQ=MONTHLY as before means the rule repeats every month.
+
BYDAY=SA,SU,1MO then means that every month the weekend before the first Monday is selected.
+
BYSETPOS=1 means to select the first day of the group, the Saturday.
+
+
I was rather surprised to find that this actually worked, but soon discovered that it has a fatal flaw. If the three days in BYDAY are all in the same month it works fine, but if either the Saturday or Sunday are in the previous month it can’t backtrack far enough and drops the event on the wrong day.
+
Even if this had worked, I suspect many calendar applications couldn't define it anyway. Thunderbird+Lightning certainly cannot; the user interface is just not able to specify this amount of detail.
+
The following is the kind of full iCalendar entry I plugged into Thunderbird, sketched here with the same event details as the RDATE example below:
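BEGIN:VCALENDAR
VERSION:2.0
X-WR-CALNAME:Hacker Public Radio
X-WR-TIMEZONE:Europe/London
BEGIN:VEVENT
SUMMARY:HPR Community News
LOCATION:mumble.openspeak.cc port: 64747
DTSTART:20130803T190000Z
DTEND:20130803T210000Z
RRULE:FREQ=MONTHLY;BYDAY=SA,SU,1MO;BYSETPOS=1
END:VEVENT
END:VCALENDAR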
Experiment 2

However, I discovered there is an alternative way through the RDATE specification. With it you can define a number of events by pre-computing them. I was able to build a test calendar containing the next twelve Community News events (there's naturally a plug-in for Vim which recognises the syntax!), load it into Thunderbird and make it send out invitations.
+
In true Hacker style I wrote a Perl script (included at the end of these notes) to generate the necessary RDATE dates. The script uses the Perl module Date::Calc to perform date calculations and Data::ICal to generate iCalendar data.
+
Running the script in the following way:
./make_meeting > experiment2.ics
generates a file containing 12 appointments that can be loaded into Thunderbird (and presumably any other iCalendar-based calendar).
BEGIN:VCALENDAR
VERSION:2.0
PRODID:Data::ICal 0.20
X-WR-CALNAME:Hacker Public Radio
X-WR-TIMEZONE:Europe/London
BEGIN:VEVENT
DESCRIPTION:This is a test\, building an iCalendar file and loading it into
 Thunderbird.\n-----------------------------------------\nMumble settings\
 nServer Name: Anything you like\nServer Address: mumble.openspeak.cc \nPor
 t: 64747\nName: Your name or alias is fine\n\nDon't have mumble\, setup in
 structions can be found on our wiki -\nhttp://linuxbasix.com/tiki-index.ph
 p?page=Linux+Basix+Mumble\n
DTEND:20130803T210000Z
DTSTART:20130803T190000Z
LOCATION:mumble.openspeak.cc port: 64747
RDATE;VALUE=DATE-TIME:20130803T190000Z
RDATE;VALUE=DATE-TIME:20130831T190000Z
RDATE;VALUE=DATE-TIME:20131005T190000Z
RDATE;VALUE=DATE-TIME:20131102T190000Z
RDATE;VALUE=DATE-TIME:20131130T190000Z
RDATE;VALUE=DATE-TIME:20140104T190000Z
RDATE;VALUE=DATE-TIME:20140201T190000Z
RDATE;VALUE=DATE-TIME:20140301T190000Z
RDATE;VALUE=DATE-TIME:20140405T190000Z
RDATE;VALUE=DATE-TIME:20140503T190000Z
RDATE;VALUE=DATE-TIME:20140531T190000Z
RDATE;VALUE=DATE-TIME:20140705T190000Z
SUMMARY:HPR Community News
END:VEVENT
END:VCALENDAR
Thunderbird’s event dialog will not let you edit the sub-events, just delete them, but the idea works, albeit in a rather clunky way.
+
I don’t have access to many other calendaring systems, except for Korganizer. It sees the multiple dates as multiple discrete events rather than a single recurring event.
+
Experiment 3
+
Other calendaring systems that do not use iCalendar can handle this problem more effectively. For many years I have used a tool called pcal (http://pcal.sourceforge.net/) that generates PostScript calendars which I print and hang on the wall. It can reliably specify the Saturday before the first Monday of each month with the expression:
Saturday before first Monday in all HPR Community News (19:00 - 21:00)
The Remind tool can express the same rule in its own notation:

REM Mon 1 --2 AT 19:00 MSG HPR Community News (19:00 - 21:00)
Remind comes with a tool called rem2ics which can generate iCalendar data. It expects output from the remind command, from which it generates the data. The following example generates 12 meetings from the above reminder, which is stored in the file .reminders.
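A plausible invocation (a sketch, assuming remind's -s option, which produces the "simple calendar" output for the given number of months that rem2ics reads) would be:

remind -s12 ~/.reminders | rem2ics > community_news.ics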
Conclusions

It seems that the iCalendar specification should be able to express the appointments we need using the compact RRULE syntax. However, in the (admittedly small) sample of calendaring applications checked, this does not seem to have been implemented properly.
+
Other tools that do not use iCalendar have less difficulty representing such events but are not as widely adopted.
+
If anyone has any ideas about how this problem could be solved more effectively then please let me know!
#!/usr/bin/perl
#===============================================================================
#
#         FILE: make_meeting
#
#        USAGE: ./make_meeting
#
#  DESCRIPTION: Makes a recurrent iCalendar meeting to be loaded into
#               a calendar. This is apparently necessary when the 'RRULE'
#               recurrence description is not adequate.
#
#      OPTIONS: None
# REQUIREMENTS: Needs modules Data::ICal and Date::Calc
#         BUGS: ---
#        NOTES: Distributed with the HPR episode "iCalendar Hacking"
#       AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
#      LICENCE: Copyright (c) 2012, Dave Morriss
#      VERSION: 1.0
#      CREATED: 13/10/2012 15:34:01
#     REVISION: 16/11/2012 16:04:37
#
#===============================================================================
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free
# Software Foundation, either version 3 of the License, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# You should have received a copy of the GNU General Public License along with
# this program. If not, see <http://www.gnu.org/licenses/>.
#===============================================================================

use 5.010;
use strict;
use warnings;

use Data::ICal;
use Data::ICal::Entry::Event;

use Date::Calc qw{
    Today Day_of_Year Add_Delta_YMD Nth_Weekday_of_Month_Year
};

#
# Date and time values
#
my @today = Today();
my @startdate;
my @rdate;
my $monday = 1;    # Day of week number 1-7, Monday-Sunday

my @starttime = ( 19, 00, 00 );
my @endtime   = ( 21, 00, 00 );

#
# Format of an ISO UTC datetime
#
my $fmt = "%02d%02d%02dT%02d%02d%02dZ";

#
# Constants for the event
#
my $calname     = 'Hacker Public Radio';
my $timezone    = 'Europe/London';
my $location    = 'mumble.openspeak.cc port: 64747';
my $summary     = 'HPR Community News';
my $description = <<ENDDESC;
This is a test, building an iCalendar file and loading it into Thunderbird.
-----------------------------------------
Mumble settings
Server Name: Anything you like
Server Address: mumble.openspeak.cc
Port: 64747
Name: Your name or alias is fine

Don't have mumble, setup instructions can be found on our wiki -
http://linuxbasix.com/tiki-index.php?page=Linux+Basix+Mumble
ENDDESC

#
# Compute the next meeting date from now
#
@startdate = make_date( \@today, $monday, 1, -2 );

#
# Create the calendar object
#
my $calendar = Data::ICal->new();

#
# Some calendar properties
#
$calendar->add_properties(
    'X-WR-CALNAME'  => $calname,
    'X-WR-TIMEZONE' => $timezone,
);

#
# Create the event object
#
my $vevent = Data::ICal::Entry::Event->new();

#
# Add some event properties
#
$vevent->add_properties(
    summary     => $summary,
    location    => $location,
    description => $description,
    dtstart     => sprintf( $fmt, @startdate, @starttime ),
    dtend       => sprintf( $fmt, @startdate, @endtime ),
);

#
# Add 12 recurring dates. (Note that this generates 12 RDATE entries rather
# than 1 entry with multiple dates; this is because this module doesn't seem
# to have the ability to generate the concatenated entry. The two modes of
# expressing the repeated dates seem to be equivalent.)
#
for my $i ( 1 .. 12 ) {
    @today = Add_Delta_YMD( @today, 0, 1, 0 );
    @rdate = make_date( \@today, $monday, 1, -2 );
    $vevent->add_property(
        rdate => [ sprintf( $fmt, @rdate, @starttime ), { value => 'DATE-TIME' } ],
    );
}

#
# Add the event into the calendar
#
$calendar->add_entry($vevent);

#
# Print the result
#
print $calendar->as_string;

exit;

#=== FUNCTION ================================================================
#         NAME: make_date
#      PURPOSE: Make the event date for recurrence
#   PARAMETERS: $refdate    An arrayref to the reference date array (usually
#                           today's date)
#               $dow        Day of week for the event date (1-7, 1=Monday)
#               $n          The nth day of the week in the given month
#                           required for the event date
#               $offset     Number of days to offset the computed date
#      RETURNS: The resulting date as a list for Date::Calc
#  DESCRIPTION: We want to compute a simple date with an offset, such as
#               "the Saturday before the first Monday of the month". We do
#               this by computing a pre-offset date (first Monday of month)
#               then applying the offset (Saturday before).
#       THROWS: No exceptions
#     COMMENTS: TODO Needs more testing to be considered truly universal
#     SEE ALSO:
#===============================================================================
sub make_date {
    my ( $refdate, $dow, $n, $offset ) = @_;

    #
    # Compute the required date: the nth day of week in this year and month
    #
    my @date = Nth_Weekday_of_Month_Year( @$refdate[ 0, 1 ], $dow, $n );

    #
    # If the computed date is before the base date advance a month
    #
    if ( Day_of_Year(@date) <= Day_of_Year(@$refdate) ) {
        #
        # Add a month and recompute
        #
        @date = Add_Delta_YMD( @date, 0, 1, 0 );
        @date = Nth_Weekday_of_Month_Year( @date[ 0, 1 ], $dow, $n );
    }

    #
    # Apply the day offset
    #
    @date = Add_Delta_YMD( @date, 0, 0, $offset );

    #
    # Return a list
    #
    return (@date);
}

# vim: syntax=perl:ts=8:sw=4:et:ai:tw=78:fo=tcrqn21:fdm=marker

HPR1648 - full show notes
+
+
Title: Bash parameter manipulation
+
Host: Dave Morriss
+
+
Bash parameter manipulation
+
I'm a great fan of using the Linux command line and enjoy writing shell scripts using the Bash shell.
+
+
BASH (or more usually Bash or bash) is the name of a Unix shell. The name stands for Bourne Again SHell, which is a play on words. Bash is an extension of the shell originally written by Stephen Bourne in 1978, usually known as sh.
+
Bash was written as part of the GNU Project which forms part of the Linux Operating System.
+
A shell is the part of the operating system that interprets commands, more commonly known as the command line.
+
A knowledge of Bash is very helpful if you would like to be able to use the power of the command line. It is also the way to learn how to build Bash scripts for automating the tasks you need to perform.
+
+
In this episode we look at what parameters are in Bash, and how they can be created and manipulated. There are many features in Bash that you can use to do this, but they are not easy to find.
+
As I was learning my way around Bash it took me a while to find these. Once I had found them I wanted to make a "cheat sheet" I could stick on the wall to remind me how to do things. I am sharing the result of this process with you.
+
The version of Bash which I used for this episode is 4.3.30(1)-release
+
What is a parameter?
+
A Bash parameter, more commonly referred to as a variable, is a named item which holds a value. Parameters are created thus:
+
username='droog'
+
There should be no spaces before or after the '=' sign. The parameter called username now contains the string 'droog'.
+
To use the contents of username the name should be prefixed with a dollar ($) sign as in:
+
echo "Your username is $username"
-> Your username is droog
+
The line beginning '->' is what will be generated by the above statement. I will be using this method of signifying output through these notes.
+
An alternative way of referring to the contents of a parameter is to enclose it in curly brackets (braces) after the dollar sign. This makes the variable name unambiguous when there is a possibility of misinterpretation: for example, ${username}s means the contents of username followed by the letter 's', whereas $usernames would be read as a different parameter.
+
Arrays
+
As well as the simple parameters seen so far, Bash also provides arrays. The simplest form is indexed by integer numbers, starting with zero and is defined either as a bracketed list:
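weekdays=(monday tuesday wednesday thursday friday saturday sunday)

or element by element (days here is an illustrative name):

days[0]='monday'
days[1]='tuesday'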
There is a lot more to arrays in Bash than this, as well as there being associative arrays indexed by strings, but we will leave them for another time.
+
Referring to arrays is achieved using curly braces and indices:
+
echo ${weekdays[4]}
-> friday
+
The entire array can be referenced with '@' or '*' as an index:
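echo ${weekdays[@]}
-> monday tuesday wednesday thursday friday saturday sunday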
Knowing this much about arrays is necessary to understand the following parameter manipulation expressions.
+
Manipulating parameters
+
If you want to change the value of a parameter there are many ways of doing it. A lot of scripts you may encounter might use sed, awk or cut to do this. For example, you might see this:
+
date='2014-10-27'
month=$(echo $date | cut -f2 -d'-')
echo "The month number is $month"
-> The month number is 10
+
Here cut was used to do the job of extracting the '10' from the contents of date. However, this is inefficient since it causes a whole new process to be run just to do this simple thing. Bash contains the facilities to do this all by itself. Here's how, using substring expansion:
+
month=${date:5:2}
+
Variable month is set to the characters of date from position 5 for 2 characters (the position is zero based by the way).
+
Parameter manipulation features
+
I have demonstrated each of these briefly. Included with these notes are two other resources: the relevant text from the bash man page and some examples in diagrammatic form. Both are PDF files generated from LibreOffice.
+
The man page extract was originally made for my own benefit as a cheat sheet to help to remind me how to use these features. If it benefits you then great. If not then no problem.
+
The diagram was also meant for me to place on a pin-board over my desk, so it includes colour and seems a little more friendly. If you like it then you're welcome.
+
I have also included the examples below, with a little more explanation. I hope it helps.
+
Use default values
+
Returns the value of the parameter unless it's not defined or is null, in which case the default is returned:
+
unset name
echo ${name:-Undefined}
-> Undefined
name="Charlie"
echo ${name:-Undefined}
-> Charlie
+
Assign default values
+
Set the value of the parameter if it is undefined or null:
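unset name
echo ${name:=Undefined}
-> Undefined
echo $name
-> Undefined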
Display error if null or unset

Displays an error and causes the enclosing script to exit if the parameter is not set (the error message contains details of where in the script the problem occurred):
+
echo ${length:?length is unset}
-> ... length: length is unset
+
Use Alternate Value
+
If the parameter is null or unset then nothing is substituted, otherwise the given alternate value is substituted (the parameter itself is left unchanged):
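unset name
echo ${name:+Defined}
->
name="Charlie"
echo ${name:+Defined}
-> Defined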
Substring expansion

This one is quite complex. The first value is an offset and the second a length. If the length is omitted then the rest of the string is returned.
+
A negative offset means to count backwards from the end of the string. Note that the sign must be preceded by a space to avoid being misinterpreted as the default value form.
+
A negative length is not really a length, but means to return the string between the offset and the position backwards from the end of the string.
+
Sections of arrays may also be indexed with this expression. The offset is an offset into the elements of the array and the length is a count of elements.
+
animal="aardvark"
echo ${animal:4}
-> vark
message="No such file"
echo ${message:0:7}
-> No such
echo ${message: -4}
-> file
echo ${message:3:-4}
-> such

colours=(red orange yellow green blue indigo violet)
echo ${colours[@]:1:3}
-> orange yellow green
echo ${colours[@]:5}
-> indigo violet
+
Names matching prefix
+
This is for reporting names of variables. We do not show all the names in the examples below.
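For example, showing the names of variables beginning with 'BASH' (the output is truncated and will vary):

echo ${!BASH*}
-> BASH BASHOPTS BASHPID BASH_ALIASES ...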
List of array keys

Lists the array indices (more generally keys) in an array.
+
colours=( red green blue )
echo ${!colours[@]}
-> 0 1 2
+
Parameter length
+
Shows the length of a parameter. If used with an array it returns the number of elements.
+
Note the second example below saves the result of the date command in an array dt. There are 6 fields separated by spaces, so the array element count reflects this.
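For example (as noted above, the date output has 6 space-separated fields):

message="No such file"
echo ${#message}
-> 12

dt=( $(date) )
echo ${#dt[@]}
-> 6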
Remove matching prefix pattern

This removes characters from the front of a string. The pattern used can contain an asterisk ('*') meaning an arbitrary number of characters. If two hash characters ('#') are used the longest match is removed.
+
So, in the first examples '*/' with one hash just removes the leading '/', but with two hashes everything up to and including the last '/' is removed. This is equivalent to the built-in basename command.
+
When applied to an array every element can be trimmed as shown in the second example. Note that here we are saving the result of trimming the leading '_' back into the array.
+
dir="/home/dave/some/dir"
echo ${dir#/home/dave/}
-> some/dir
echo ${dir#*/}
-> home/dave/some/dir
echo ${dir##*/}
-> dir

colours=(_red _green _blue)
colours=(${colours[@]#_})
echo ${colours[@]}
-> red green blue
+
Remove matching suffix pattern
+
This feature is similar to the previous one and removes characters from the end of a string. The pattern used to determine what to remove is the same as before, and the use of double '%' characters makes the deletion affect the maximum number of characters.
+
Note that using '/*' in the examples has an effect similar to the dirname command.
+
A common use is shown in the second example where the extension of a filename is deleted and replaced.
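For example (the .wav to .mp3 renaming is an illustrative case):

dir="/home/dave/some/dir"
echo ${dir%/*}
-> /home/dave/some
echo ${dir%%/*}
->

file="episode.wav"
echo "${file%.wav}.mp3"
-> episode.mp3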
Pattern substitution

This feature permits quite sophisticated changes to be made to a string or an array. The first part after the first '/' is the pattern to match, and can contain '*' as described before. See the Bash Hackers site for a very good explanation of this. The second string is what is to replace the target.
+
If two '/' characters follow the parameter name then all matches that are found are replaced.
+
If the pattern string begins with a '#' then it must match at the start of the parameter, however if the pattern string begins with a '%' then it must match at the end of the parameter.
+
msg='An ant is an ant'
echo ${msg/ant/insect}
-> An insect is an ant
echo ${msg/%ant/insect}
-> An ant is an insect
echo ${msg//ant/insect}
-> An insect is an insect

colours=(red green blue)
echo ${colours[@]/green/yellow}
-> red yellow blue
echo ${colours[@]/#/_}
-> _red _green _blue
+
Case modification
+
Finally, this feature changes the case of letters. The '^' symbol makes matching letters into uppercase, and ',' converts to lowercase. A single '^' or ',' after the parameter name makes one change, whereas doubling this symbol changes every matching character.
+
Note the use of a pattern enclosed in square brackets matches the enclosed letters. Adding a '^' to the start of this list inverts the matching effect as seen below.
+
msg='the quick brown fox'
echo ${msg^}
-> The quick brown fox
echo ${msg^^}
-> THE QUICK BROWN FOX
echo ${msg^^o}
-> the quick brOwn fOx
echo ${msg^^[tqbf]}
-> The Quick Brown Fox
echo ${msg^^[^tqbf]}
-> tHE qUICK bROWN fOX
+
+
My Audio Player Collection
+
Introduction
+
I got broadband installed in my house in 2005 after I'd bought my first PC. I'd owned a lot of PCs before that, but they had all been cast-offs from the university I was working at, and I accessed the Internet via dial-up to my work.
+
This was around the time I got sick of listening to the radio and first discovered podcasts, and so I decided I wanted a portable audio player (or MP3 Player as they tended to be called back then).
+
Since then I have been listening to podcasts pretty much all of the time and have worked my way through a number of players. I thought it might be interesting if I chronicled the devices I have owned in the past 9-10 years.
+
Players
+
iRiver iFP-899
+
When purchased     2005-03-12
Vendor             Amazon UK
Capacity           1GB
Cost               £119.99
FM Tuner           Yes
Microphone?        Yes
Rockbox capable?   No
+
My first player was the iRiver iFP-899. This was a neat little device with a small monochrome screen and a joystick. It could record through an in-built microphone and played MP3 and WMA. It took a single AA battery and looked like a simple USB storage device when connected to a PC. It had a dedicated lock switch to disable the buttons, and came with a protective cover.
+
It was quite expensive at £120 but came highly recommended. I remember it being discussed on TLLTS which I had started listening to at that time, as well as being Adam (Podfather) Curry's device of choice.
+
Sadly it didn't last more than about a year and a half. The joystick accumulated dust and pocket lint and stopped working. I didn't have the skills to strip it apart and clean it out, so it went into storage (a.k.a. the junk box).
+
Pictures: iRiver iFP-899 front and back
+
Samsung YP-Z5A
+
When purchased     2006-09-24
Vendor             Amazon UK
Capacity           4GB
Cost               £99.99
FM Tuner           No
Microphone?        No
Rockbox capable?   No
+
The replacement for the iRiver was the Samsung YP-Z5A. This has a much larger capacity, is a neater shape for the pocket and plays OGG. The lack of a radio and microphone is a disadvantage but the capacity was huge in comparison to the iRiver.
+
I liked the controls on this device, and particularly the existence of a good solid locking switch.
+
This device has developed a few problems but is still working and is still used occasionally. There has been talk of a Rockbox version but nothing seems to have come of it.
+
Pictures: Samsung YP-Z5A front and back
+
Samsung YP-Q1
+
When purchased     2008?
Vendor             Amazon UK
Capacity           16GB
Cost               around £100
FM Tuner           Yes
Microphone?        Yes
Rockbox capable?   No
+
Although this device got some good reviews it proved to be a bad buy. Firstly the controls are extremely sensitive and very difficult to use. Secondly, many of the features are only available if you connect it to the Windows EmoDio software, and third, the only way of accessing it through USB is via MTP.
+
The radio is good and the sound quality is great, and the device can handle MP3, OGG and FLAC. However, all in all this is not a player I would recommend.
+
Interestingly, Amazon lost my purchase details from their database for this period, so I don't have exact details. This bothers me more than it should!
+
Picture: Samsung YP-Q1
+
SanDisk Sansa Fuze
+
When purchased     2009-07-02
Vendor             Amazon UK
Capacity           4GB
Cost               £66.75
FM Tuner           Yes
Microphone?        Yes
Rockbox capable?   Yes
+
This is a fantastic player. I bought a hard case for it which has protected it very well. It has a great colour display and the controls are excellent. The only down-side for me is the awkwardness of the locking function - sliding the rather tiny and inaccessible power switch in reverse.
+
I was not too pleased with the native software so this was the first player on which I installed Rockbox. This of course turns it into an even better device capable of playing an amazingly wide range of formats.
+
The 4GB size can be extended with an SDHC card making this a large capacity player. The playlist capabilities of Rockbox make the Fuze even better.
+
The later version of the Fuze (the Fuze+) is apparently not as usable as this one, though I don't have experience of it. The Fuze model can sell for almost the same price as the original on eBay.
+
Picture: Sandisk Sansa Fuze
+
SanDisk Sansa Clip+
+
When purchased     2010-11-30
Vendor             Amazon UK
Capacity           8GB
Cost               £38.99
FM Tuner           Yes
Microphone?        Yes
Rockbox capable?   Yes
+
This is another excellent device which I originally bought to use at the gym. The screen is small and without colour but I don't see that as a problem. The controls are great, easy to use and robust. The only issue I have had with it is the fragility of the clip at the back, which broke off after only a few months.
+
The 8GB size can be extended with an SDHC card, similarly to the Fuze.
+
Rockbox can be installed on this device also, and I did this quite soon after buying it. I wish it had a dedicated locking switch, but the device can be locked under Rockbox by pressing Home and Select.
+
I have used this player for recording part of one show for HPR, and I know that Ken Fallon always uses the Clip+ as a backup recording device.
+
Picture: SanDisk Sansa Clip+
+
iRiver H10 5GB
+
When purchased     2013-05-28
Vendor             eBay UK
Capacity           5GB
Cost               £17.25
FM Tuner           Yes
Microphone?        Yes
Rockbox capable?   Yes
+
I noticed that the audio players were becoming less popular and more scarce as people used their smartphones for this job, so I wanted to have several players in reserve. I looked at the Rockbox site for compatible players and started hunting for examples of them on eBay.
+
This iRiver was my first find. It is an interesting machine. Quite heavy and fairly large, with a hard disk. It has a removable battery, a colour screen and a locking switch.
+
The one I have emits a faint high-pitched noise when running, so I tend not to use it.
+
Picture: iRiver H10 5GB
+
SanDisk Sansa Clip+
+
When purchased     2013-06-02
Vendor             eBay UK
Capacity           4GB
Cost               £17.28
FM Tuner           Yes
Microphone?        Yes
Rockbox capable?   Yes
+
Having had good experiences with the Clip+ before, and noticing their availability on eBay, I bought another one. This has been a great device, well worth the price, even though it is not new.
+
Apple iPod mini 2nd Generation
+
When purchased     2013-06-03
Vendor             eBay UK
Capacity           4GB
Cost               £16.46
FM Tuner           Yes
Microphone?        Yes
Rockbox capable?   Yes
+
I wanted to see what this model iPod was like, but having acquired it I find I really dislike this device for reasons I'm not entirely sure about. Rockbox improves it to some degree, but the hardware seems poor in comparison to the Sansa Fuze for example. I would not recommend this player given the availability of the alternatives.
+
Picture: Apple iPod mini 2nd Generation
+
iRiver H10 6GB
+
When purchased     2013-06-15
Vendor             eBay UK
Capacity           6GB
Cost               £11.37
FM Tuner           Yes
Microphone?        Yes
Rockbox capable?   Perhaps
+
This device seems identical to the 5GB model. However, on trying to install Rockbox I could not get it to run.
+
Researching this model (rather too late) I found that other people had also had issues with it and Rockbox, so I suspect there may be an issue with this particular configuration.
+
Other purchases
+
Having had such great experiences with the Sansa Fuze and Clip+ I decided to collect a few more as backup devices should the others fail. I bought three Sansa Fuze players and one Sansa Clip+.
+
This turned out to be a bit of a mixed experience. The Clip+ was purchased from Play.com as a manufacturer refurbished player, and has been absolutely superb. However, two of the three Fuze players gave problems. They were all bought from eBay, but I think that two were version 1 hardware and were running firmware versions which begin with '01'. These players seem unreliable and randomly dismount themselves when plugged in to a PC downloading media. The third Fuze seems good, and seems to be a version 2 device with '02' version firmware.
+
I am very glad to have these players. I have an Android smartphone, a fairly recent purchase, but it's too big and heavy for a shirt pocket, and is a lot less convenient than a Sansa Fuze or Clip+. Your mileage may vary of course!
Downloading the Astronomy Picture of the Day

Being a KDE user I quite like a moderate amount of bling, and I particularly like to have a picture on my desktop. I like to rotate my wallpaper pictures every so often, so I want to have a collection of images. To this end I download the Astronomy Picture of the Day (APOD) on my server every day and make the images available through an NFS-mounted volume.
+
In 2012 I wrote a Perl script to perform the download, using a fairly primitive HTML parsing method. This script has been improved over the intervening years and now uses the Perl module HTML::TreeBuilder which I believe is much better at parsing HTML.
+
The version of the script I use myself also includes the Perl module Image::Magick which interfaces to the awesome ImageMagick image manipulation software suite. I use this to annotate the downloaded image with the title parsed from the HTML so I know what it is.
+
The script I am presenting here is called collect_apod_simple and does not use ImageMagick. I chose to omit it because the installation of this suite and the related Perl module can be difficult. Also, I do not feel that the annotation always works as well as it could, and I have not yet found the time to correct this shortcoming.
+
A version of the more advanced script (called collect_apod) is available in the same place as collect_apod_simple should you wish to give it a try. Both scripts are available on GitLab under the link https://gitlab.com/davmo/hprmisc.
+
The Code
+
If you are acquainted with Perl you'll probably find this script quite simple. All it really does is:
+
+
Get or compute the date string for building the APOD URL
+
Download the HTML on the selected APOD page
+
Look for an image being used as a link
+
Download the image being linked to and save it where requested
+
+
The following is a numbered listing with annotations. There are several comments in the script itself, but the annotations are there to try and make the various sections as clear as possible.
+
1 #!/usr/bin/env perl
+ 2 #===============================================================================
+ 3 #
+ 4 # FILE: collect_apod_simple
+ 5 #
+ 6 # USAGE: ./collect_apod_simple [YYMMDD]
+ 7 #
+ 8 # DESCRIPTION: Downloads the current Astronomy Picture of the Day or that
+ 9 # relating to the formatted date provided as an argument. In
+ 10 # this context "current" can mean two URLs: .../astropix.html or
+ 11 # .../apYYMMDD.html. We now *do not* download the
+ 12 # .../astropix.html version since it has a different HTML
+ 13 # layout.
+ 14 #
+ 15 # OPTIONS: ---
+ 16 # REQUIREMENTS: ---
+ 17 # BUGS: ---
+ 18 # NOTES: Based on 'collect_apod' but without the Image::Magick stuff,
+ 19 # for simplicity and for release to the HPR community
+ 20 # AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+ 21 # VERSION: 0.0.1
+ 22 # CREATED: 2015-01-02 19:58:01
+ 23 # REVISION: 2015-01-03 23:00:27
+ 24 #
+ 25 #===============================================================================
+ 26
+ 27 use 5.010;
+ 28 use strict;
+ 29 use warnings;
+ 30 use utf8;
+ 31
+ 32 use LWP::UserAgent;
+ 33 use DateTime;
+ 34 use HTML::TreeBuilder 5 -weak;
+ 35
36 #
+ 37 # Version number (manually incremented)
+ 38 #
+ 39 our $VERSION = '0.0.1';
+ 40
+ 41 #
+ 42 # Set to 0 to be more silent
+ 43 #
+ 44 my $DEBUG = 1;
+ 45
+ 46 #
+ 47 # Script name
+ 48 #
+ 49 ( my $PROG = $0 ) =~ s|.*/||mx;
+ 50
+ 51 #-------------------------------------------------------------------------------
+ 52 # Edit this to your needs
+ 53 #-------------------------------------------------------------------------------
+ 54 #
+ 55 # Where the script will download the picture. Edit this to where you want
+ 56 #
+ 57 my $image_base = "$ENV{HOME}/Backgrounds/apod";
+ 58
+ 59 #-------------------------------------------------------------------------------
+ 60 # Nothing needs editing below here
+ 61 #-------------------------------------------------------------------------------
+ 62
+ 63 #
+ 64 # Get the argument or default it
+ 65 #
+ 66 my $arg = shift;
+ 67 unless ( defined($arg) ) {
+ 68 #
+ 69 # APOD wants a date in YYMMDD format
+ 70 #
+ 71 my $dt = DateTime->now;
+ 72 $arg = sprintf( "%02i%02i%02i",
+ 73 substr( $dt->year, -2 ),
+ 74 $dt->month, $dt->day );
+ 75 }
+ 76
+ 77 #
+ 78 # Check the argument is a valid date in YYMMDD format
+ 79 #
+ 80 die "Usage: $PROG [YYMMDD]\n" unless ( $arg =~ /^\d{6}$/ );
+ 81
+
+
Lines 66-80 collect the date from the command line, or if none is given generate the correctly formatted date. If a date in an invalid format is given the script aborts.
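For example, a hypothetical run requesting the picture for 2015-01-06 would be:

./collect_apod_simple 150106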
+
+
82 #
+ 83 # Make an URL depending on the argument
+ 84 #
+ 85 my $apod_base = "http://apod.nasa.gov/apod";
+ 86 my $apod_URL = "$apod_base/ap$arg.html";
+ 87
+
+
Lines 85-86 define the APOD URL for the chosen date. This will look like http://apod.nasa.gov/apod/ap150106.html for 2015-01-06 for example.
+
+
88 #
+ 89 # General declarations
+ 90 #
+ 91 my ( $image_URL, $image_file );
+ 92 my ( $tree, $title );
+ 93 my ( $url, $element, $attr, $tag );
+ 94
+ 95 #
+ 96 # Enable Unicode mode
+ 97 #
+ 98 binmode STDOUT, ":encoding(UTF-8)";
+ 99 binmode STDERR, ":encoding(UTF-8)";
+ 100
+ 101 if ($DEBUG) {
+ 102 print "Base URL: $apod_base\n";
+ 103 print "APOD URL: $apod_URL\n";
+ 104 print "Image base: $image_base\n";
+ 105 print "\n";
+ 106 }
+ 107
+ 108 #
+ 109 # Get the HTML page, pretending to be some unknown User Agent
+ 110 #
+ 111 my $ua = LWP::UserAgent->new;
+ 112 $ua->agent("MyApp/0.1");
+ 113
+ 114 my $req = HTTP::Request->new( GET => $apod_URL );
+ 115
+ 116 my $res = $ua->request($req);
+ 117 if ( $res->is_success ) {
+ 118 print "GET request successful\n" if $DEBUG;
+ 119
+ 120 #
+ 121 # Parse the HTML we got back
+ 122 #
+ 123 $tree = HTML::TreeBuilder->new;
+ 124 $tree->parse_content( $res->content_ref );
+ 125
+
+
Lines 111-114 set up and download the APOD web page. If the download was successful then the HTML is parsed with HTML::TreeBuilder in lines 123 and 124.
+
+
126 #
+ 127 # Get and display the title in debug mode
+ 128 #
+ 129 if ($DEBUG) {
+ 130 if ( $title = $tree->look_down( _tag => 'title' ) ) {
+ 131 $title = $title->as_trimmed_text();
+ 132 print "Found title: $title\n" if $title;
+ 133 }
+ 134 }
+ 135
+ 136 #
+ 137 # Look for the image. This is expected to be the href attribute of an <a>
+ 138 # tag. The image we see on the page is merely a link to this (usually)
+ 139 # larger image.
+ 140 #
+ 141 for ( @{ $tree->extract_links('a') } ) {
+ 142 ( $url, $element, $attr, $tag ) = @$_;
+ 143 if ($DEBUG) {
+ 144 print "Found: $url\n" if $url;
+ 145 }
+ 146 last unless defined($url);
+ 147 last if ( $url =~ /\.(jpg|png)$/i );
+ 148 }
+ 149
150 #
+ 151 # Abort if no image (it might be a video or a GIF)
+ 152 #
+ 153 die "Image URL not found\n"
+ 154 unless defined($url)
+ 155 && $url =~ /\.(jpg|png)$/i;
+ 156
+
+
Lines 153-155 check that an image URL was actually found. Some days the APOD site might host a YouTube video or some other animated display. The script is not interested in these since they are no use as wallpaper.
+
+
157 $image_URL = "$apod_base/$url";
+ 158
+ 159 #
+ 160 # Extract the final part of the URL for the file name. We usually get
+ 161 # a JPEG, sometimes with a shouty extension, which we change.
+ 162 #
+ 163 ( $image_file = $image_URL ) =~ s|.*/||mx;
+ 164 ( $image_file = "$image_base/$image_file" ) =~ s/JPG$/jpg/mx;
+ 165
+ 166 if ($DEBUG) {
+ 167 print "Image URL: $image_URL\n";
+ 168 print "Image file: $image_file\n";
+ 169 }
+ 170
+ 171 #
+ 172 # Abort if the file already exists (the script already ran?)
+ 173 #
+ 174 die "File $image_file already exists\n" if ( -f $image_file );
+ 175
+
+
Lines 157-174 prepare the image URL and make a file name to hold the image.
+
+
176 #
+ 177 # Set up the GET request for the image
+ 178 #
+ 179 $req = HTTP::Request->new( GET => $image_URL );
+ 180
+ 181 #
+ 182 # Download the image to the (possibly renamed) image file
+ 183 #
+ 184 $res = $ua->request( $req, $image_file );
+ 185 if ( $res->is_success ) {
+ 186 print "Downloaded to $image_file\n" if $DEBUG;
+ 187 }
+ 188 else {
+ 189 #
+ 190 # The image download failed
+ 191 #
+ 192 die $res->status_line, " ($image_URL)\n";
+ 193 }
+ 194
+
+
Lines 179-193 download the image to a file.
+
+
195 }
+ 196 else {
+ 197 #
+ 198 # We failed to get the web page
+ 199 #
+ 200 die $res->status_line, " ($apod_URL)\n";
+ 201 }
+ 202
+ 203 exit;
+ 204
+ 205 # vim: syntax=perl:ts=8:sw=4:et:ai:tw=78:fo=tcrqn21:fdm=marker
+
I hope you find the script interesting and/or useful.
Vim: moving around and more configuration

In this episode I want to look at how you move around the file you are editing in Vim. I also want to add some more elements to the configuration file we started building in the last episode.
+
Moving Around
+
One of the powerful features of Vim is the ease with which you can move around a file.
+
Simple movement
+
Some of the basic movements in Normal mode are:
+
Key                    Action
l or cursor-right      Move right
k or cursor-up         Move up
j or cursor-down       Move down
h or cursor-left       Move left
$ or End key           Move to the end of the line
0 or Home key          Move to the start of the line
^                      Move to the first non-blank character of the line
-                      Move up to first non-blank character
+                      Move down to first non-blank character
+
Note: In the Vim documentation there is an alternative annotation for these keys (and many others):
+
Vim Annotation     Key
<Up>               cursor-up
<Down>             cursor-down
<Left>             cursor-left
<Right>            cursor-right
<Home>             home
<End>              end
+
We will use this form of annotation in these and future notes. These will also be important when we look at customisation.
+
If a key is used in conjunction with the Shift or Control (CTRL) keys the annotation is shown as <S-Right> (shift + cursor-right) or <C-Right> (CTRL + cursor-right).
+
Some of these motion commands hardly seem different from what is available in other editors. Many presses of the right cursor key will move the cursor to the right a number of columns in most editors. However, Vim allows these keys to be preceded by a number. So, typing:
+
10l
+
will move the cursor 10 characters to the right, as will 10<Right>.
+
The same goes for 10h or 10<Left>, 10k or 10<Up> and so forth.
+
The only movement commands in this group which do not take a count are 0 / <Home> and ^.
+
Word-related movement
+
The next movement commands (used in Normal mode) move the cursor in relation to words in the text. There are two definitions of "word" in this context. We will use the Vim convention in these notes and refer to them as word and WORD.
+
These are the definitions from the Vim documentation:
+
word : a sequence of letters, digits and underscores, or a sequence of other non-blank characters, separated with white space (spaces, tabs, end of line). An empty line is also considered to be a word.
+
WORD : a sequence of non-blank characters, separated with white space. An empty line is also considered to be a WORD.
+
+
Key                Action
w or <S-Right>     Move forward to the start of a word
W or <C-Right>     Move forward to the start of a WORD
e                  Move forward to the end of a word
E                  Move forward to the end of a WORD
b or <S-Left>      Move backward to the start of a word
B or <C-Left>      Move backward to the start of a WORD
+
These movement commands may be preceded by a numeric count, as before, so 5w or 5<S-Right> will move the cursor forward by 5 words to the start of the 6th word from the current position.
+
The following list shows the effects of various word-related movement commands on the example log record. It contrasts the use of word versus WORD commands. The ^ characters represent the cursor positions after the various commands. All commands begin moving from the F of FAT. The last two move to the right 80 columns then backwards.
+
FAT-fs (sdh): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
+ ^
+5w ^
+5W ^
+7e ^
+7E ^
+80l5b ^
+80l5B ^
+
If this is unclear then here are the effects of these commands in text form:
+
+
5w moves forward to the closing bracket
+
5W moves forward to the first lower case 'a'
+
7e moves forward to the '8'
+
7E moves forward to the last 'd' of 'recommended'
+
80l5b moves forward 80 columns to the right to the second 'e' of 'filesystem' then backwards to the 'f' of 'for'
+
80l5B moves forward 80 columns to the right then backwards to the 'c' of 'charset'
+
+
There are many more movement commands which we will look at in forthcoming episodes.
+
+
More configuration settings
+
In the last episode we looked at some of the basic elements of the configuration file. So far the file contains the following:
+
set nocompatible
set backup
set undodir=~/.vim/undodir
set undofile
+
We can now add some more settings.
+
Adding a ruler
+
Vim will display a ruler at the bottom of the screen if this option is enabled. Add the following to the configuration file:
+
set ruler
+
This causes the line and column number of the cursor position to be shown at the bottom right of the screen, separated by a comma. When there is room, the relative position of the displayed text in the file is shown on the far right.
+
The relative position is Top when the first line of the file is visible, Bot when the last line is visible, All when both top and bottom lines are visible, and if none of the foregoing, N%, the relative position in the file.
+
The command can be abbreviated to se ru. I prefer to use the full form because it is easier to remember what it means!
+
The ruler can also be turned off with set noruler which you would prefix with a colon while in a Vim editing session to enter command mode:
+
:set noruler
+
It is possible to customise the contents of the ruler, but we will not be looking at this for the moment.
+
Note that some Linux distributions set this option for you. I run Debian Testing and a set ruler definition can be found in /usr/share/vim/vim74/debian.vim. It is a good idea to set it in your configuration file regardless, however, because you might need to transfer this file to another distribution in the future.
+
Adding a status line
+
By default the Vim window uses the whole terminal window except for the last line, as we saw in episode 1. The last line is used for displaying various messages, and the ruler, and for entering ":" commands.
+
It is possible to separate the status information from the command entry line with the following option:
+
set laststatus=2
+
This creates an inverse colour status line at the bottom of the screen followed by the command entry line. The status line contains the name of the file being edited, and the ruler (if enabled). The final line contains messages and is where commands are entered.
+
If the terminal you are using is small (like the 24 line by 80 column hardware terminals Vi was originally written for), stealing these lines from the Vim workspace may be a problem. In today's world it's unlikely to be so, and I always enable these.
+
This command can be abbreviated to se ls=2.
+
The status line can also be turned off with set laststatus=0.
+
Showing the mode
+
As we have seen, Vim is a modal editor with several modes, some of which we have yet to look at. By default, Vim does not indicate which mode it is in, but the following command in the configuration file will change this:
+
set showmode
+
As with set ruler, some Linux distributions set this for you, but I believe in setting this myself.
+
With showmode enabled a message such as -- INSERT -- (insert mode) is shown on the last line of the Vim window.
+
This command can be abbreviated to se smd.
+
The mode display can also be turned off with set noshowmode which you would prefix with a colon while in a Vim editing session to enter command mode:
+
:set noshowmode
+
Adding comments
+
The comment character used by Vim in configuration files and elsewhere is the double quote character '"'. See the summary below for an example.
+
Screenshot
+
The following screenshot shows Vim in an xterm window (24x80) editing the notes for this episode (written in enhanced Markdown, to be processed with pandoc). The configuration file used is the same as that shown below in the summary.
+
Picture: Vim with ruler and status line
+
Summary
+
+
Movement
+
+
h, j, k, l or cursor keys
+
$ or <End>
+
0 or <Home>
+
^, - and +
+
w or <S-Right>, W or <C-Right>
+
e, E
+
b or <S-Left>, B or <C-Left>
+
+
Configuration file - this time with comments
+
+
" Ensure Vim runs as Vim
set nocompatible

" Keep a backup file
set backup

" Keep change history
set undodir=~/.vim/undodir
set undofile

" Show the line,column and the % of buffer
set ruler

" Always show a status line per window
set laststatus=2

" Show Insert, Replace or Visual on the last line
set showmode
+
+
Mailing List Etiquette (HPR Show 1740)
+
Dave Morriss
+
+
+
+
+
+
+
+
+
+
Overview
+
In February 2015 I created a script to add a section to the monthly Community News show notes. The added section summarises the discussions on the HPR mailing list over the previous month. My script processes the messages archived on the Gmane site and reports on the threads it finds there.
+
In writing this script I noticed the number of times people made errors in replying to existing message threads and initiating new threads on the list. I thought it might be helpful if I explained some of the do's and don'ts of mailing list use to help avoid these errors.
+
List Etiquette Summary
+
Since this document is long I have included a brief summary here.
+
+
Threads - keep related messages together in time order
+
+
Use "Reply" in your mail client - it knows how to do threading properly
+
+
Use Reply to List or Reply To All - or the list or sender might not get a copy
+
+
Do not change the "Subject" line - make a new thread for a new subject
+
Do not try to start a new thread by replying to an old one - make a new thread for a new subject
+
Do not start a new thread to reply to an existing one - just use "Reply"
+
Do not reply to digest messages - digests are poison for threaded email
+
+
Formatting replies - tidy stuff up before sending it
+
+
Quote the text you are replying to - make it clear who said what
+
Trim the text you are replying to - you know it makes sense
+
+
Don't send back the PGP/GPG signature! - doh!
+
+
Do not top post - backwards reading like correspondents your and you unless
+
Use an email client that can do the right thing! - MS Outlook anyone?
+
+
+
Threads
+
The term thread, meaning a collection of messages relating to a subject, is quite old. It goes back to a time before the Internet. I certainly encountered it in the context of Usenet News before email existed. In current mail systems the term conversation is often used, but it still boils down to a way of ordering messages according to which one is a reply to another.
+
Many mail clients offer a threaded view of mail messages. I have used Thunderbird for many years, and now as a Debian user I use Icedove, the Debian version. Threading is enabled on a per-folder basis and to my mind Icedove does an excellent job.
+
While researching for this episode I found an add-on for Thunderbird called ThreadVis which displays a graphic at the top of a message visualising the thread to which it belongs. I have included an image of what this looks like:
+
Picture: Threads in Icedove with ThreadVis
+
This is the thread from February 12th this year where Ken Fallon forwarded a message to the list from an organisation called Cybrary.
+
Notice how the thread is displayed in the Thunderbird pane. I have enabled threading in the folder and have expanded this thread by clicking on the triangle to the left, and the subject of each message is displayed. There are lines connecting messages, with indentation to show their level.
+
The ThreadVis display also does a nice job of showing the thread in my opinion, and it indicates that there was an external message which began the thread (the message forwarded in the first email from Cybrary), which it represents as a grey box. The messages are represented as coloured circles connected by lines. The lines show the reply relationship between messages and the length of the line represents the time between messages. Each of the messages can be viewed in a pop-up by hovering over the coloured circles, or can be opened by clicking the circle. The various authors in a thread are colour coded.
+
The only slight down-side I have found with ThreadVis is that the Global search and indexer option in Thunderbird has to be on. I had switched this off in the past because it made my Core 2 Duo workstation run slowly. On my current Core i7 with 16Gb RAM it seems to run just fine. ThreadVis uses this index to enable it to thread across all folders, which Thunderbird itself does not do.
+
How Email Threads Work
+
To look into email threads we need to examine the way email works in more detail.
+
The structure of an email message is defined by an Internet specification document known as an RFC (Request For Comments). The particular one covering email is RFC 5322 "Internet Message Format".
+
An email message consists of two parts, the header and the body. To be precise, when the message is in transit it is enclosed in a structure called the envelope, but that is removed upon delivery. We will not go into a lot of detail on the structure of email messages here. There are many sources of this information, such as the Wikipedia article on Email linked to at the end.
+
The message header contains lines known as fields in the format:
+
Name: Value
+
The body part contains the actual message content and can vary in structure from simple text to an arbitrarily complex hierarchy of MIME objects such as HTML, pictures, videos and so on. We will not look at the structure of this part of the message any more here.
+
Some examples of the header fields in a message are:
+
Date: Thu, 12 Feb 2015 15:08:12 +0100
From: Ken Fallon <ken@fallon.ie>
To: HPR Hacker Public Radio Mailing List <hpr@hackerpublicradio.org>
Subject: [Hpr] Fwd: Cross-Promotional Opportunties
+
These are frequently used by the mail client when displaying the message, as can be seen in the picture above.
+
The "Message-ID:" Header Field
+
Mail messages also contain a header field which contains an unique identifier for the particular message. This field is named Message-ID:, and contains a value which looks a little like an email address but is not, for example:
+
Message-ID: <54DCB3CC.3090906@fallon.ie>
+
This message identifier (the value part) is intended to be machine readable and is not necessarily meaningful to humans. The message identifier is intended to be a globally unique identifier for a message.
+
It is not mandatory for this field to be present according to the standards, but without it a lot of the important features of modern email systems fail. Email clients which do not generate a Message-ID: can be regarded as broken I think.
+
The "In-Reply-To:" and "References:" Header Fields
+
When an email client is used to reply to a message it generates header fields which refer back to the ancestors of the message. These fields are named In-Reply-To: and References:.
+
The In-Reply-To: field normally contains a single value which refers to the parent message. It does this by using the Message-ID: value from the parent message.
+
The References: field can contain much of the information required to build a thread. Sadly it cannot be relied on to contain all of the thread information. Normally it will contain the contents of the parent's References: field (if any) followed by the contents of the parent's Message-ID: field.
+
If the parent message does not contain a References: field but does have an In-Reply-To: field containing a single message identifier, then the References: field will contain the contents of the parent's In-Reply-To: field followed by the contents of the parent's Message-ID: field.
+
So the first reply to the above message contains the following fields:
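(The identifiers below are illustrative reconstructions, since the original values are not shown; only the parent's Message-ID: is taken from the example above, and the Cybrary identifier is invented.)
+
In-Reply-To: <54DCB3CC.3090906@fallon.ie>
+References: <0123ABCD.456@mail.cybrary.example>
+ <54DCB3CC.3090906@fallon.ie>
+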
As expected, the In-Reply-To: field contains the contents of the parent message's Message-ID: field. The References: field contains the same, but, perhaps surprisingly, it also contains the contents of the Message-ID: field of the message that was originally forwarded to the mailing list.
+
That is because the first message in the thread contains the following header fields:
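(Again the values are an illustrative reconstruction, reusing the invented Cybrary identifier from above.)
+
In-Reply-To: <0123ABCD.456@mail.cybrary.example>
+References: <0123ABCD.456@mail.cybrary.example>
+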
This is how ThreadVis was able to show that another message was referenced in the thread. It did this with a grey box to signify that the message was not present, as we saw.
+
So, what is a thread then?
+
As you have probably realised from the description so far, an email message thread is defined by these links. Each message points to its ancestors, and if the whole collection of such messages is analysed it is possible to build details of the children of each message as well.
+
While researching this topic I came across Jamie Zawinski's description of his algorithm for analysing a thread as used in early Mozilla products. I found this fascinating since I'd worked out my own algorithm when trying to analyse the messages from the HPR mailing list on Gmane.
+
However, I appreciate that you might be less enthusiastic about this and will leave it here!
+
List Etiquette
+
Leaving the ultra technical stuff behind, let's look at the etiquette subject itself (also referred to as netiquette). There are several behaviours which constitute good list etiquette. Maintaining thread consistency is one; another concerns the citation of the previous message.
+
Threads
+
Use "Reply" in your mail client
+
As we have seen, all you need to do to ensure that your reply on a mailing list is properly threaded is to use your mail client's Reply facility. This will perform the steps necessary to insert the correct headers and all will go along fine.
+
If your mail client can't do this then I'd be fascinated to hear about it. I'd guess you either need to configure things properly or discard it in favour of a properly standards-compliant client.
+
As an aside: you should pay attention to where your reply is going. The HPR list is not configured to direct replies back to the list, so most mail clients will by default reply to the sender alone, which will usually not include the list itself.
+
My email client has a Reply to List function, but that does not send a copy to the sender. It also has a Reply to All option which replies to the list and the sender. I usually use the former, since I can generally assume that the sender will receive the reply through the list.
+
Do not change the "Subject" line
+
It's usually seen as bad etiquette to change the Subject: field in a thread. Sometimes people will correct a misleading subject, but for clarity this should be done as follows:
+
Subject: Aardvarks
+Subject: The price of beef [was "Re: Aardvarks"]
+
Keeping a reference back to the original subject makes it clear that the change was well considered and (probably) appropriate.
+
Do not try to start a new thread by replying to an old one
+
Sometimes you will see users of a mailing list trying to start a brand-new thread by replying to a message in an old one, and using a new subject. This is bad etiquette and can also be counter-productive in some circumstances.
+
For example, the script that summarises message threads in the HPR Community News show notes will not see such a message as a new thread and will not list it in the summary. See the example in the image above, where an existing thread is used to try and start a new topic with the subject "Intro and Outro". Notice how the summary in the notes for the HPR Community News for February 2015 does not include the topic "Intro and Outro" for this reason.
+
Do not start a new thread to reply to an existing one
+
Another common mistake when intending to join a conversation is to create a brand new message and copy the subject of the relevant thread. As we have seen, this will result in the message not being joined to the thread because the mail client will not be able to generate the necessary headers.
+
However, some thread analysing systems try very hard to get around this problem. The strategy is to look for "orphaned" messages like this with a subject matching an existing thread, then join them into this thread at a position based on the time stamp. The Gmane system does this, as does Jamie Zawinski's system (according to his description). My HPR summary script also does this. However, Thunderbird does not do this when displaying threads.
+
Of course, no algorithm is able to perform a repair like this if the subject line has been altered, so please do not rely on it.
+
Do not reply to digest messages
+
Many mailing list systems provide a digest facility, where all messages in a period such as a day, or a certain number of messages, are bundled up and sent out together, rather than messages being sent individually. This can be a great convenience if the list is very busy or contains newsletters or other "read only" material.
+
Many mailing list systems, including "Mailman" used for the HPR list, are able to generate plain text or MIME digests. The plain text format conforms to RFC 1153, which is good for human readability but removes all headers from each message, including those required for threading. The MIME format sends each message as a MIME attachment to a digest message. This format preserves the full headers, but since the messages are embedded in another message, not all mail clients can deal with them.
+
If the list is used for discussions, receiving digests can be a problem if you ever want to reply to anything. Replying directly to a digest will not result in your reply being part of a thread. The message you are replying to will probably be one of several, and will be encapsulated in the digest message. The digest will not usually convey the identifiers of its constituent messages, and even if it does, most email clients are unable to reply to a message within a message.
+
For a low traffic list like the HPR list it would be better not to subscribe to the digest list. If you do, it would be best not to reply to these messages.
+
Formatting replies
+
When replying to a message it is highly desirable to format the original message and your reply, for reasons of clarity, legibility and economy. See the Wikipedia article on posting style for an in-depth treatment.
+
Many mail clients offer the ability to perform formatting on the original message when replying, and it is recommended that this feature be used wherever available.
+
Quote the text you are replying to
+
It is regarded as bad etiquette not to mark the original text in a reply. The method used most often is to start with a line in a format similar to:
+
On datetime, author wrote:
+
Where datetime is the time stamp for the original message and author is the sender's name and/or email address.
+
The text of the original message then follows, each line marked with the characters "> " (a greater-than sign and a space). For example the initial reply might look like this:
+
On 01/01/1505 20:30, Fr. Benedictus wrote:
+> Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+> tempor incididunt ut labore et dolore magna aliqua.
+
If a third person then replies to the reply, they should also do the same thing, keeping the original quoted reply, such as:
+
On 01/01/1505 20:34, Fr. Alessandro wrote:
+
+> On 01/01/1505, Fr. Benedictus wrote:
+> > Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do
+> > eiusmod tempor incididunt ut labore et dolore magna aliqua.
+>
+> At vero eos et accusamus et iusto odio dignissimos ducimus qui
+> blanditiis praesentium voluptatum deleniti atque corrupti quos dolores
+
Most, if not all, mail clients will do this, or something similar, for you.
+
Also, many mail clients will make this layout much easier to read by various methods. For example, a Thunderbird add-on can colour the different levels of quotes to make them easier to follow. Another will collapse all quotes, replacing them with buttons which can be clicked to expand the collapsed quoted text.
+
Trim the text you are replying to
+
It is considered bad etiquette to leave the entirety of the original text in the reply. Some degree of trimming is most desirable, but this must be done so as to leave the meaning intact. Someone reading the thread in the future should be able to understand the conversation.
+
Salutations and signatures can and should be removed from the original text.
+
It is important to remove the sender's PGP/GPG signature if there is one. Without doing this mail clients which understand these items will become confused about what is being signed and by whom.
+
It is my experience that clients which are capable of signing and encrypting/decrypting messages will do this removal for you.
+
Do not top post
+
The term "top posting" refers to the practice of placing the reply before the text of the previous message. This is generally regarded as bad etiquette since it reverses the normal flow of conversation and requires the message to be read from the bottom up. In the case where several people have replied to a message, some top posting and others replying beneath, the end result can be almost indecipherable.
+
Most mail clients will offer the facility of positioning your text after the original text, and this feature should be enabled.
+
Some people feel that a top posted reply is more convenient in that they don't have to scroll past all the preceding material to read it. However, using an email client which can collapse and expand quotes is a good compromise here. If all but the last reply is collapsed this shrinks the message down considerably, yet the intermediate text can be consulted if necessary.
+
The screenshot below shows a reply to a message where the previous quoted text has been collapsed. Ironically the hidden message started with a top post!
+
Picture: Message with collapsed quote
+
To be fair, the subject of top posting seems to be controversial and possibly in a state of flux. While preparing this show I found a lengthy discussion of the right way to reply to a mailing list on the Mailman-Users mailing list. You can read it here. There are some interesting points made in this thread, including the fact that the authors of many modern mail clients are now forcing users away from the more normal posting style. I certainly experienced this in my working life when the installation of Microsoft mail products in the organisation changed posting behaviour for the worse.
+
Use an email client that can do the right thing!
+
As you might have noticed if you have read the Wikipedia article on posting style below, some mail clients are not capable of following these guidelines. Microsoft Outlook seems particularly challenged in this area, so if you can, avoid it and other clients like it!
+
+
Useful Bash functions (HPR Show 1757)
+
Dave Morriss
+
Table of Contents
+
Overview
+
I enjoy writing Bash scripts to solve various problems. In particular I have a number of scripts I use to manage the process of preparing a show for HPR, which I am developing at the moment.
+
My more complex Bash scripts use a lot of functions to perform the various tasks, and, in the nature of things, some of these functions can be of use in other scripts and are shared between them.
+
I thought I would share some of these functions with HPR listeners in the hopes that they might be useful. It would also be interesting to receive feedback on these functions and would be great if other Bash users contributed ideas of their own.
+
Example Functions
+
The following functions are designed to be used in shell scripts. I have a few other functions, some of which I use from the command line, but I will leave discussing them for another time.
+
The way I usually include functions in scripts is to keep them all in a file I call function_lib.sh in the project directory. I then add the following to my script (the variable BASEDIR holds the project directory):
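A minimal sketch of that include line (the exact wording isn't shown in these notes, but it is presumably something like this):
+
source "$BASEDIR/function_lib.sh"
+
The pad function
+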
This is a simple function whose purpose is to write formatted lines to the screen. It outputs some text, padded to a chosen length using a chosen character. It adds the padding on the right, left or both sides to make the text centred in the width.
+
The arguments it requires are:
+
+
The text to display
+
The desired length of the padded string (default 80)
+
The character to pad with (default '-')
+
The side on which the padding is to be added: L, R or C (centre) (default R)
+
+
The function might be called as follows to achieve the results shown:
+
pad 'Title ' 40 '='
+Title ==================================
+
+pad ' Title' 40 '=' L
+================================== Title
+
+pad ' Title ' 40 '=' C
+================ Title =================
+
It can also be used to output a line of 80 hyphens with the call:
+
pad '-'
+
I often use this function to generate lines and headers in reports I display on the terminal.
+
Code
+
1 #=== FUNCTION ================================================================
+ 2 # NAME: pad
+ 3 # DESCRIPTION: Pad $text on the $side with $char characters to length $length
+ 4 # PARAMETERS: 1 - the text string to pad (no default)
+ 5 # 2 - how long the padded string is to be (default 80)
+ 6 # 3 - the character to pad with (default '-')
+ 7 # 4 - the side to pad on, L or R or C for centre (default R)
+ 8 # RETURNS: Nothing
+ 9 #===============================================================================
+ 10 pad () {
+ 11 local text=${1?Usage: pad text [length] [character] [L|R|C]}
+ 12 local length=${2:-80}
+ 13 local char=${3:--}
+ 14 local side=${4:-R}
+ 15 local line l2
+ 16
+ 17 [ ${#text} -ge $length ] && { echo "$text"; return; }
+ 18
+ 19 char=${char:0:1}
+ 20 side=${side^^}
+ 21
+ 22 printf -v line "%*s" $(($length - ${#text})) ' '
+ 23 line=${line// /$char}
+ 24
+ 25 if [[ $side == "R" ]]; then
+ 26 echo "${text}${line}"
+ 27 elif [[ $side == "L" ]]; then
+ 28 echo "${line}${text}"
+ 29 elif [[ $side == "C" ]]; then
+ 30 l2=$((${#line}/2))
+ 31 echo "${line:0:$l2}${text}${line:$l2}"
+ 32 fi
+ 33 }
+
Explanation
+
+
I use a Vim plugin called Bash Support which can generate a standard comment template and function boilerplate, and I have used this here to generate this function and the comment template in lines 1-9.
+
The function starts (lines 11-14) by declaring a number of local variables to hold the arguments. Only one argument, the text to output, is mandatory. The declaration of variable text uses the Bash parameter manipulation feature Display Error if Null or Unset which will abort the function and the calling script with an error message if no value is provided.
+
The other variable declarations supply default values using the Bash feature Use default values.
+
One of the instances where the function needs to take special action is if the supplied text is as long or longer than the total length. The expression on line 17 tests for this, and if found to be true it simply displays the text and returns from the function.
+
Next (lines 19 and 20) the variable char is processed, ensuring it's only one character long, and side is forced to upper-case.
+
The next part (line 22) uses printf to create the padding characters. The -v option to printf writes the result to a variable. The format string just consists of a %s specifier, used for writing a string. The asterisk (*) after the percent sign (%) causes printf to get the width of the string from the argument list.
+
The first argument to printf (after the format string) is the result of an arithmetic expression where the length of the text is subtracted from the desired length. The second argument is a space. So, the printf generates a space-filled string of the required length and stores it in the variable line.
+
The next statement (line 23) replaces all the spaces with the padding character. This uses the Bash feature Pattern substitution.
+
Finally, the function uses an if statement (lines 25-32) to determine how to display the text and the padding. If side is R or L the padding is on the right or left respectively. If it is C then half of the padding is placed on one side and half on the other. Parts of the padding string are selected with the Bash feature Substring Expansion (line 31).
+
+
The function does a good enough job for my needs. It does not deal with the case where the padding character is a space, but that is not a problem as far as I am concerned. It may be a little too simplistic for your tastes.
+
The yes_no function
+
This is another simple function which asks a question and waits for a yes/no reply. It returns a true/false result so it can be used thus:
+
if ! yes_no 'Do you want to continue? ' 'No'; then
+ return
+fi
+
It takes two arguments:
+
+
The prompt string
+
An optional default value.
+
+
It returns true (0) if the response is either Y or Yes, regardless of case, and false (1) otherwise.
+
Code
+
1 #=== FUNCTION ================================================================
+ 2 # NAME: yes_no
+ 3 # DESCRIPTION: Read a Yes or No response from STDIN and return a suitable
+ 4 # numeric value
+ 5 # PARAMETERS: 1 - Prompt string for the read
+ 6 # 2 - Default value (optional)
+ 7 # RETURNS: 0 for a response of Y or YES, 1 otherwise
+ 8 #===============================================================================
+ 9 yes_no () {
+ 10 local prompt="${1:?Usage: yes_no prompt [default]}"
+ 11 local default="${2// /}"
+ 12 local ans res
+ 13
+ 14 if [[ -n $default ]]; then
+ 15 default="-i $default"
+ 16 fi
+ 17
+ 18 #
+ 19 # Read and handle CTRL-D (EOF)
+ 20 #
+ 21 read -e $default -p "$prompt" ans
+ 22 res="$?"
+ 23 if [[ $res -ne 0 ]]; then
+ 24 echo "Read aborted"
+ 25 return 1
+ 26 fi
+ 27
+ 28 ans=${ans^^}
+ 29 ans=${ans//[^YESNO]/}
+ 30 if [[ $ans =~ ^Y(E|ES)?$ ]]; then
+ 31 return 0
+ 32 else
+ 33 return 1
+ 34 fi
+ 35 }
+
Explanation
+
+
The function starts (lines 10 and 11) by declaring a number of local variables to hold the arguments. Only one argument, the prompt, is mandatory. The declaration of variable prompt uses the Bash parameter manipulation feature Display Error if Null or Unset which will abort the function and the calling script with an error message if no value is provided.
+
The declaration of the default variable (line 11) copies the second argument and strips any spaces from it. This is because the answers catered for must not contain spaces.
+
If the default variable is not empty (lines 14-16) the string "-i " is prepended to it, ready for use as an option in the following read command. Note that the expansion of default on line 21 is deliberately left unquoted: it must split into the -i option and its value, and vanish entirely when there is no default. This is only safe because any spaces were stripped out earlier.
+
Next (line 21) a read command is issued to obtain input from the user.
+
+
The "-e" option ensures that the readline library is used to read the value. This permits line editing in the same way as on the command line.
+
If there is a default value then this is passed through the "-i" option which we have already added to the default variable. If there is no default value then nothing will be substituted here.
+
The "-p" option specifies the prompt string.
+
The result of the read is written to the variable ans.
+
+
The read command returns a true or false result (as do all Bash commands), and this can be found in the special variable "?". This is stored in the local variable res (line 22).
+
If the res variable is not true (0) then the if statement (lines 23-26) will display "Read aborted" and exit the function with a false result. A false result from read typically means the user pressed CTRL-D (end of file) to abort the input.
+
The variable ans contains the answer the user typed (or accepted) and this is then processed in various ways (lines 28 and 29). First it is forced to upper case, then any letters other than "YESNO" are removed.
+
Finally, an if statement (lines 30-34) compares ans to the regular expression ^Y(E|ES)?$. This matches if the answer begins with a Y and is optionally followed by an E or by ES. If there is a match the function returns true (0), otherwise it returns false (1).
+
+
This way of doing things means that the reply 'Yup great' is stripped down to YE which is a match. Many other words that reduce to Y, YE or YES like 'Yeast' also match. This might not be a good idea in your particular case.
+
The other aspect of this function you might find slightly undesirable is the way the default is provided. If given, the default value will be on the input line and to override it you will need to delete it (CTRL-W is what I use). I am happy with this but you might not be!
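+
To make the default behaviour concrete, here is what the prompt from the earlier example looks like (a hypothetical session; the No is pre-typed by read -i and can be edited or deleted before pressing Enter):
+
Do you want to continue? No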
+
+
Vim Hints 004 (HPR Show 1776)
+
Dave Morriss
+
Table of Contents
+
More movement commands
+
So far we have seen how to move by character, by word and by line. We saw how Vim has two concepts of word with some demonstrations of what that means.
+
Now we will look at more movement commands.
+
This information can be found in the Vim Help (type :help motion.txt) and is online here. I will be making reference to the Vim documentation in this episode, even though it is very detailed and covers many aspects we have not yet reached. There is a help page about Vim Help (type :h help) which is online here.
+
Sentences and paragraphs
+
Sentences and paragraphs are referred to as text objects in Vim.
+
A sentence is defined as ending at a '.', '!' or '?' followed by either the end of a line, or by a space or tab.
+
In Normal mode the ) command moves forward one sentence, and ( moves backward. Both commands, like those we have seen before, take a count, so 3) moves forward three sentences.
+
In the context of Vim a paragraph is a group of sentences which begins after each empty line.
+
In Normal mode the } command moves forward one paragraph, and { moves backward. Again both commands take a count, so 2} moves forward two paragraphs.
+
Moving up and down
+
There are many ways of moving up and down in a file. We have already seen two such commands: - and +. Both can be preceded by a number, so 10- moves up ten lines and positions to the first non-blank character and 10+ moves downwards in an equivalent way.
+
The G command will move to a specific line in the file. Typing a G on its own in Normal mode will move to the end of the file. Typing 1G will move to the first line of the file (there is also a gg command that does the same). Otherwise, any number before the G will move to that line, so 42G moves to line 42.
+
The gg command mentioned above can also be used to move to a particular line, so 42gg moves to line 42 in the same way as 42G.
+
Searching
+
Not surprisingly, with Vim you can search the file you are editing. Full information can be found in the Vim Help (type :h pattern.txt) or online here.
+
Searching forward is initiated by typing the / key in Normal mode, and to search backward the ? key is used.
+
When a search is initiated the / or ? character appears in the command line at the bottom of the screen and the next characters you type are the search target. The typing of the target is ended by pressing the <CR> (or <Enter>) key and the search is initiated.
+
The search target can be something quite simple like a sequence of letters and numbers, but it is actually a pattern which can be a regular expression. This is quite a large subject so we will deal with it in more depth in a later episode of this series. For now we will restrict ourselves to the simpler aspects.
+
Typing the following sequence in Normal mode:
+
/the<CR>
+
will result in a search for the characters the and the cursor will be positioned on the next occurrence forward from the current location.
+
Pressing the <Esc> key while typing the search target will abort the search.
+
Once the first occurrence has been found pressing n will move to the next occurrence. This results in forward movement if the search used / and backward movement when using ?.
+
Pressing N causes the search to change direction.
+
Preceding a search by a number causes it to skip to the nth instance of the target. So typing 3 then:
+
/but<CR>
+
will position to the third instance of but from the current cursor position.
+
There are a number of settings that affect the searching process, and some recommended ones are listed and explained below in the Configuration file section. In short they do the following:
+
+
ignore the case of letters when searching except when the target contains a capital letter, when an exact match is searched for
+
start the search as the target is being typed
+
continue the search at the top (or bottom) of the file when the bottom (or top) is reached
+
highlight all the search matches
+
+
Matching pairs
+
Vim can move the cursor between matching pairs of characters such as '(' and ')', '{' and '}' and '[' and ']'. The command that does this in Normal mode is %.
+
If the cursor is placed on the opening character of the pair it will jump to the closing one. If on the closing character it will jump to the opening one. If it is positioned before the opening character of the pair it will jump to the closing one. If it is between the pair it will be positioned to the opening character.
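+
For example, in the following line of code (an invented illustration), with the cursor on the first '(' the % command jumps to the final ')', and pressing % again jumps back:
+
result = f( (a + b) * (c - d) )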
+
The command will also move between the start and end of a C-style comment:
+
/* C-style comment */
+
It is possible to extend the pairs of characters that this command recognises, and there is a Vim plugin available which considerably enhances its functionality. I will leave this subject until later in this series when we look at Vim plugins.
+
Commands that make changes
+
When Vim is editing a file it makes a copy of its contents into a buffer. This is what is displayed and can be manipulated. As we already know, the changes can be discarded with :q! or saved to the file with :w. Changes can also be undone with the u command.
+
The commands in this section perform changes to the buffer.
+
Insert commands
+
These are commands that insert new text into the buffer. They can all be preceded by a count.
+
Full information can be found in the Vim Help (type :h insert.txt) or online here.
+
Appending text
+
The a command appends text after the cursor. Vim enters Insert mode and text will continue to be added until the Escape (<Esc>) key is pressed. If there was a count before the command the insertion is repeated that many times.
+
The A command also appends text, but at the end of the line.
+
Inserting text
+
The i command inserts text before the cursor. As before Insert mode is ended by pressing the Escape (<Esc>) key. If there was a count before the command the insertion is repeated that many times.
+
The I command inserts text at the start of the line before the first non-blank. The insertion is repeated if a count was present.
+
Vim has an alternative command gI which is like I but inserts the text in column 1.
+
Beginning a new line
+
The o command begins a new line below the cursor and allows text to be entered until Escape (<Esc>) is pressed. The count causes the new line and any text to be repeated.
+
The O command begins a new line above the cursor and allows text to be entered until Escape (<Esc>) is pressed. The count causes the new line and any text to be repeated.
+
Examples of text insertion
+
+
Typing 80i-<Esc> at the start of a blank line will create a line of 80 hyphens.
+
Typing eas<Esc> while on a word will append an s to it.
+
Typing 10oHello World<Esc> will cause 10 lines containing Hello World to be inserted.
+
+
Deletion commands
+
Full information can be found in the Vim Help (type :h change.txt) or online here.
+
The x command in Normal mode deletes the character under the cursor. With a count it deletes characters after the cursor. It will not delete beyond the end of the line.
+
The X command in Normal mode deletes the character before the cursor. With a count it will delete that number before the cursor. It will not delete before the start of the line.
+
The dd command deletes lines, one by default, or more if a count was given.
+
The D command deletes from the character under the cursor to the end of the line, and if a count was given, that number minus 1 more full lines.
+
There is a d command as well but we will look at that shortly.
+
Change commands
+
The cc command deletes the number of lines specified by the count (default 1) and enters Insert mode to allow text to be inserted. The <Esc> key ends the insertion as before.
+
The C command deletes from the cursor position to the end of the line, and if a count was given, that number minus 1 more full lines, then enters Insert mode. The <Esc> key ends the insertion as before.
+
There is a c command as well but we will look at that shortly.
+
The s command deletes count characters and enters Insert mode, which is ended with the <Esc> as usual.
+
The S command is a synonym for the cc command described above.
+
Changes and movement
+
At last we can join together the movement commands and some of the commands that change things in Vim. This is where some of the real editing power of Vim resides.
+
Deleting with movement
+
We skipped the d command in the section above because it only really comes into its own in conjunction with motions. When followed by a motion command, d deletes the text encompassed by the motion.
+
So, for example, dw deletes to the beginning of the next word from the position of the cursor. The table below shows some examples of the operator+movement combinations:
+
Command   Action
+-------   ----------------------------------------------------------
+dw        Delete from the cursor to the start of the next word
+de        Delete from the cursor to the end of the next word
+d$        Delete from the cursor to the end of the line (same as D)
+d0        Delete from before the cursor to the beginning of the line
+d)        Delete from the cursor to the end of the sentence
+
Changing with movement
+
As with the d command, we skipped the c command in the section above; it deletes in the same way but then enters Insert mode.
+
So, for example, cw deletes to the beginning of the next word from the position of the cursor, then enters Insert mode for a replacement to be inserted. The table below shows some examples of the operator+movement combinations:
+
Command   Action
+-------   ----------------------------------------------------------
+cw        Change from the cursor to the start of the next word
+ce        Change from the cursor to the end of the next word
+c$        Change from the cursor to the end of the line (same as C)
+c0        Change from before the cursor to the beginning of the line
+c)        Change from the cursor to the end of the sentence
+
There are many more ways of deleting and changing text with movement which we will look at in more detail in a future episode.
+
+
Configuration file
+
The story so far
+
In the last episode we extended the configuration file with a ruler and status line. Now we can add some more settings that make Vim more convenient to use.
+
Full information on the options available in Vim can be found in the Vim Help (type :h options.txt) or online here.
+
Stop beeping!
+
Vim has a tendency to beep to alert you to events and errors. This can be a tiny bit annoying, especially in a shared workplace. Instead of an aural alert you can request a visual one with the command:
+
set visualbell
+
The abbreviation is se vb and the inverse is set novisualbell or se novb.
+
Showing incomplete commands
+
As we have seen, Vim commands can consist of sequences of numbers and command letters. For example 23dd means delete 23 lines.
+
The command:
+
set showcmd
+
makes Vim show the command that is being typed. So with 23dd the 23d part will be visible waiting for the final d, after which the display will be cleared and the command actioned.
+
The display of the partial command is shown in the status line at the bottom of the screen.
+
The abbreviation is se sc and the effect can be reversed with set noshowcmd or se nosc.
+
Command history
+
By default Vim will remember the last 50 ':' commands (and the last 50 searches) in history tables. When you press the ':' key or begin a search with '/' or '?' the history table can be traversed with the up and down cursor keys. The size of all of the history tables can be extended with a command such as the following:
+
set history=100
+
The abbreviation for the above command is se hi=100.
+
Ignore case when searching
+
Normally Vim searches for the exact case you provide in your search target. You can switch this off with the command:
+
set ignorecase
+
You might think that this is a little counter-intuitive; I certainly did when I first encountered it. However, in conjunction with the next command:
+
set smartcase
+
it seems more usable. When the smartcase option is enabled Vim will search for both lower and upper case forms when there are only lower case letters in the target, but will search for an exact match when the target is mixed case.
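+
For example, with both options set (an invented illustration):
+
/vim    matches vim, Vim and VIM
+/Vim    matches only Vim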
+
The abbreviation for set ignorecase is se ic and for set smartcase is se scs. The options can be reversed with set noignorecase (se noic) and set nosmartcase (se noscs).
+
Searching incrementally
+
While typing a search pattern, Vim can show where that part of the pattern which has been typed so far matches. This feature is enabled with the incsearch option. The matched string is highlighted, but if the pattern is invalid or not found, nothing is shown. In this mode the screen will be updated frequently so it should not be used over a slow link to a remote system!
+
set incsearch
+
The abbreviation is se is and the option is turned off with set noincsearch or se nois.
+
Wrapping the search around
+
When searching Vim normally stops at the end (forward searches) or beginning (reverse searches) of the file. With the wrapscan option searches wrap around.
+
set wrapscan
+
The abbreviation is se ws and the option is turned off with set nowrapscan or se nows.
+
As the search wraps a message is displayed in the status line.
+
Highlighting the search
+
Vim can be configured to highlight all occurrences of the search pattern with the command:
+
set hlsearch
+
The abbreviation is se hls and the option is turned off with set nohlsearch or se nohls.
+
The highlight stays in effect until cancelled, which can get a little tedious, so Vim allows the current pattern match to be turned off with the command :nohlsearch (abbreviated to :nohl).
+
Enable extra features in INSERT mode
+
Vim allows more functionality when in Insert mode than vi. It is possible to work in Insert mode most of the time, as in editors such as Nano. Enabling these features is done with the set backspace option. This is followed by a list of up to three items separated by commas:
+
+
indent - allows backspacing over auto indents (not covered yet in this series)
+
eol - allows backspacing over line breaks (thus permitting inserted lines to be joined)
+
start - allows backspacing over the start of the insert to previously existing text
+
+
To get the full functionality of Vim it is probably wise to use all three items:
+
set backspace=indent,eol,start
+
The abbreviation is se bs=indent,eol,start.
+
+
Summary
+
+
Movement
+
+
), ( move forward and backward by sentences
+
}, { move forward and backward by paragraphs
+
G, gg move to a specific line or beginning or end of file
+
% move between matching pairs of characters
+
+
Searching
+
+
/ to search forward
+
? to search backwards
+
+
Changing
+
+
a and A to append text
+
i and I to insert text
+
o and O to open a new line and insert text
+
x and X to delete characters
+
dd and D to delete lines
+
dmotion to delete up to a movement target
+
s and S to change characters
+
cc and C to change lines
+
cmotion to change up to a movement target
+
+
+
Configuration file
+
" Ensure Vim runs as Vim
+set nocompatible
+
+" Keep a backup file
+set backup
+
+" Keep change history
+set undodir=~/.vim/undodir
+set undofile
+
+" Show the line,column and the % of buffer
+set ruler
+
+" Always show a status line per window
+set laststatus=2
+
+" Show Insert, Replace or Visual on the last line
+set showmode
+
+" Stop beeping! (Flash the screen instead)
+set visualbell
+
+" Show incomplete commands
+set showcmd
+
+" Increase the command history
+set history=100
+
+" Turn off case in searches
+set ignorecase
+
+" Turn case-sensitive searches back on if there are capitals in the target
+set smartcase
+
+" Do incremental searching
+set incsearch
+
+" Set the search scan to wrap around the file
+set wrapscan
+
+" Highlight all matches when searching
+set hlsearch
+
+" Allow extra movement in INSERT mode
+set backspace=indent,eol,start
+
+
Life and Times of a Geek - part 2 (HPR Show 1811)
+
Dave Morriss
+
Table of Contents
+
Introduction
+
In the last part of my story I told you of my first encounter with a mainframe computer and the Algol 60 language while I was an undergraduate student at Aberystwyth University.
+
Today I want to talk about the next stage as a postgraduate student at the University of Manchester.
+
Farewell Aberystwyth
+
I had a wonderful three years in Aberystwyth. It was a beautiful location by the sea with access to all sorts of landscapes and environments; perfect for a Biology student. I could go on at great length about the forays into tidal pools in front of the main University buildings on the sea front, the Welsh woodlands, mountains, salt marshes and bogs we visited. I could tell you about the student who used to bring her pet Jackdaw and her Border Collie to lectures, or the tale of the incredibly fierce rat that my friend and I allowed to escape in the lab, which caused 30 students to jump on tables and chairs. However, I will not. Perhaps another time.
+
Suffice it to say that as we needed to specialise in the last year of study I gravitated towards the area of Animal Behaviour. I did a project on memory in goldfish, training them to perform a task, then using a drug on them to prevent the formation of long-term memory and showing that they had forgotten their task while the control group had not.
+
I obtained a reasonably good Honours degree in Zoology in the summer of 1972 and then had to consider what to do next.
+
I considered developing the programming skills I had acquired in a Biological context, and applied to a few places with this idea in mind. However, nobody wanted a newly graduated Zoologist who had done a little programming, it seemed, or maybe the fact that I didn't really know what to do next was glaringly obvious. I started looking for a possible place to take a postgraduate degree.
+
I was offered a place to study for a PhD in the Animal Behaviour group in the Zoology Department at the University of Manchester. As was normal in those days, I had been awarded a Local Education Authority grant to study for my first degree. However, I could not find funding for my PhD, so I put my studies on hold and went home to try and find a job, with the intention of funding myself for my first year, and seeing what happened after that.
+
A Year Out
+
Back home I found a job by the simple expedient of knocking on the door of a local plastics factory where I had worked before during vacations. I ended up as a labourer, doing shift work, earning about £0.50 per hour. Through this method I managed to accumulate enough to fund myself for my next year.
+
Being an inveterate hoarder I seem to have kept my employment contract, and happened to find it recently while tidying the house.
+
+My contract with United Glass Closures and Plastics Ltd.
+
+
In case you are wondering, the Grinding Department was responsible for chopping up all the waste plastic, melting it down and converting it to pellets so it could be re-used. It was fairly heavy, noisy, boring work, but it achieved the desired goal.
+
University of Manchester
+
In the autumn of 1973 I was in the city of Manchester, at the University of Manchester, one of the largest universities in the UK. I was there to obtain a PhD (Doctor of Philosophy) degree, doing research in Animal Behaviour.
+
My research topic was to look at how animals decide what to eat, where to look for food, how much effort to expend finding and eating it, and so on. At the time this area was variously referred to as feeding strategies, optimal foraging and by other names. Later in the decade and into the early 1980's this subject became what is now known as Behavioural Ecology, one of the areas where mathematical methods (and ideas from Economics) were used to describe and predict animal behaviour.
+
A recent programme in the BBC Radio 4 series "In Our Time" did a fine job of covering this subject and can be heard on the BBC website if you are interested (and if the site is not blocked from outside the UK).
+
Zoology Department
+
I found myself a member of the Zoology Department, which was then a separate entity within the University. It was later incorporated into the School of Biological Sciences, after my time, in 1986. The Zoology Department had been established in 1870 and the rooms and laboratories had an ancient feel about them which I really liked. In 1973 the department was housed in a beautiful old building adjoining the Manchester Museum. Postgraduate students in the department were given keys to the building and these also gave access to the Museum which was linked to the front part of the building. The Museum contained some fascinating exhibits, including a number of live animals.
+
My PhD Supervisor had two other students who were starting at the same time that I was, both doing research in Animal Behaviour. Two of us were using the Barbary Dove (Streptopelia risoria) as our experimental animal and the other was researching the Common Marmoset (Callithrix jacchus). Our animals were in the Animal House in the basement of an old building near the Zoology Department.
+
It was usual in those days for postgraduate students in the Department to begin their research projects by carrying out a literature review and writing it up for assessment. In my case this consisted of reading through any of the relevant journals held by the University Library, or for more up to date material, reading a publication called Current Contents which summarised recent publications in peer-reviewed scientific journals. If a paper looked interesting in Current Contents then it could either be obtained by requesting a photocopy through the Inter-Library Loans service or by writing to the author (whose address would be published with details of the paper) to ask for a reprint. Needless to say, this was a slow and laborious process, though the arrival of a new paper was an exciting event.
+
The purpose of doing the literature review was to become highly conversant with the subject and as up to date as possible with published research. This required keeping a good collection of references to papers and reprints, and the way to do this in those days was with a filing system. I started by keeping a box file full of hand-written index cards in alphabetical order. There was very little at that time for doing this in any other way.
+
My supervisor introduced us to a slightly more advanced technology in the form of edge-notched cards at this time. These have holes punched all around the edges, which can be notched with a punching tool (we used scissors) to differentiate them from other cards. The principle is that cards relevant to a topic will all be notched in a particular position. They can be extracted from the deck by passing a needle or rod through the relevant hole and lifting out all the cards which are not relevant to a search. Searches can even be combined by using more than one needle or rod.
+
This system was a type of mechanical database, though I have to admit that the sophistication of this method was largely lost on us and we used the simpler methods we had already started with.
+
University of Manchester Regional Computer Centre
+
Across the road from the Zoology Department was the Kilburn Building, which was fairly recently built, having been opened in 1972. This contained the Computer Science Department (later the School of Computer Science) and, on the ground floor, the University of Manchester Regional Computer Centre (UMRCC).
+
UMRCC was one of the regional computer centres funded by Government to provide high-powered computer facilities for universities in the local region. UMRCC initially provided services for a group of universities which, as well as Manchester itself, included Salford, Liverpool, Keele and Lancaster. The University of London Computer Centre (ULCC) was one of the other such centres.
+
I don't know if there was much in the way of inter-computer networking going on at that time. I would not have had to use it myself being at the heart of things in Manchester, but I think the access to the Regional Centre was via Remote Job Entry (RJE) facilities at the satellite universities. Sadly, I cannot seem to find much information on these facilities now. I will leave further discussion of this subject until a later episode when I speak about finding myself at one of the satellite universities.
+
At the time that I was there, UMRCC had a state of the art CDC 7600 computer from Control Data Corporation, front-ended by an ICL 1906A. The CDC 7600, designed by Seymour Cray, who also designed the Cray-1 later in his career, was considered to be the fastest supercomputer in the world at that point. I think it ran the SCOPE operating system, but I have not found much to support this vague memory. The 1906A ran the GEORGE operating system, either GEORGE 3, or since this model had paging hardware, GEORGE 4 - I don't remember, and I can't find any records any more.
+
Picture: CDC 7600 Attribution: "CDC 7600.jc" by Jitze Couperus - Flickr: Supercomputer - The Middle Ages. Licensed under CC BY 2.0 via Wikimedia Commons
+
As a student I was able to get an account on these systems and soon started learning about them and using them. The main work-horse was the CDC 7600 with the 1906A (quite a powerful computer in its own right at that time) being mainly used as a gateway to the CDC. As before, programs mostly had to be written on coding sheets and punched cards generated, but at UMRCC the users had access to card punches for small amounts of work, making corrections and so forth.
+
There were also teletypes available to us, connected to the ICL 1906A, but I didn't use these at the start, and I will talk about them later.
+
One of the things that fascinated me about the Computer Centre was the viewing gallery. Access to the ground floor of the building was through a corridor with a glass wall looking into the computer room. In it was all of the hardware, mainframes, tape drives, card readers, line printers and so on. The computer operators wore white coats and could be seen tending to the machines.
+
A pair of rather poor quality videos (links 1 and 2 below) are available on YouTube, made at some time in the 1980s when there were two CDC 7600s and the ICL 1906A had been replaced by an Amdahl. There are several views into the machine room from the viewing gallery in these videos, and the room looked similarly full of hardware in the early 1970s.
+
The building itself was heated by waste heat from this equipment. I was in Manchester during the Miners' Strike and the Three-Day Week when a lot of electrical equipment was shut off and lights turned out to save power. UMRCC kept going during this time and was heated where other places were not.
+
Programming Languages
+
During this period, I was writing programs in Algol 60 as before, though the compiler available to me was different from the one I had been used to. I also learned Fortran at this time.
+
Fortran
+
Fortran seemed a strange language compared to Algol 60. Statements had a fixed layout, starting in column 7 up to column 72 with columns 73-80 often being used for sequence numbers, to keep the card deck in order. Hardware card sorters were available to sort mis-ordered decks. If a statement had to be continued to a second card then each continuation card needed a character in column 6. Columns 1-5 contained a numeric label used by GOTO statements and others, and if column 1 contained a C that made the card a comment card.
+
See the example of a simple Fortran IV program on the WikiBooks site for what the Fortran of this time looked like.
+
In this example you will see FORMAT statements that define input and output formats. In this program all of these statements are collected at the top, though most people placed them after the WRITE statements they were associated with. As an example consider the following FORMAT with its associated WRITE:
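(The original example was lost in the reformatting of these notes; this reconstruction, with invented label, widths and variables, is in the same spirit.)
+
      WRITE(6,200) NOBS, TMEAN
+  200 FORMAT(1H ,15HNO. OF OBS. =  ,I4,8H, MEAN =,F8.2)
+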
Here the H (Hollerith) format defines a sequence of characters of a predefined width, I (Integer) format defines an integer number and F (Floating point) defines a real or floating point number. It was quite laborious having to count the width of H formats in particular.
+
Also, the first character defined in a FORMAT statement had special significance. These were the days of line-printers, which needed control characters to set the line spacing; the first character output on a line by a Fortran program was treated as a line-printer carriage control character. A space in this position tells the printer to advance to a new line on the output, a zero advances two lines (double spacing), a '1' advances to the top of a new page, and a '+' will not advance at all, allowing overprinting.
+
Failure to remember the use of the first column could result in problems. For example, writing a column of numbers where the first digit was a '1' could result in the printer throwing large numbers of pages with just a single (truncated) number on each. Printing signed numbers starting in column 1 could result in them all overprinting one another as a consequence of the '+'. Neither of these mistakes was very popular with the computer operators!
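+
As an invented illustration of the first mistake, a format with no explicit carriage control character:
+
      WRITE(6,300) NUM
+  300 FORMAT(I6)
+
With NUM holding a six-digit value such as 123456, the leading '1' is consumed as carriage control, throwing a page and printing only 23456.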
+
The WRITE statement defines the output unit (6) which will be associated with a device like a file, card punch or line printer, and the format statement which defines the layout of the data. There were various ways in which units were associated with devices. Sometimes the association was by default and other times the job control cards surrounding the program itself defined these associations.
+
One of the great things about Fortran was that were many libraries available for numeric work or to plot results. I had dabbled with using a graph plotter at Aberystwyth and I learned more about how to do this at Manchester. I also made heavy use of the Numerical Algorithms Group (NAG) Library which contained many tools for numerical work like random number generators, statistical methods and matrix manipulation functions.
+
Pascal
+
The language Pascal had also started to become popular around this time. It had been published in 1970 and a CDC version had been developed. Due to its similarity with Algol 60 I wondered if it might be a language I could use.
+
I acquired a copy of the book written by Jensen and Wirth, a strange thing which looked as if it had been generated on a typewriter, with handwritten insertions where there were unusual characters. Sadly I don't seem to have this any more; I must have lent it out and never had it returned or possibly it's lurking in some forgotten corner of my house.
+
Pascal was remarkable at the time because it had data type definitions, records, sets and pointers, which Algol 60 did not. I learnt how to use it and wrote some programs in it but it seemed rather abstract and not very practical compared to Fortran.
+
In its early incarnations Pascal required declarations to be made in a particular order:
+
labels
+constants
+types
+variables
+functions and procedures
+
+
labels are numeric and are the target of goto statements. Pascal users are strongly dissuaded from using goto!
+
constants are identifiers associated with values
+
types allow the programmer to define new data types
+
variables are identifiers which refer to storage areas of various types
+
functions are subroutines that return a value
+
procedures are subroutines that do not return a value (a small example of each is sketched after this list)
+
+
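For instance, a minimal function and procedure might look like this (invented examples, not from the original notes):
+
function square(x : integer) : integer;
+begin
+  square := x * x
+end;
+
+procedure greet;
+begin
+  writeln('Hello world')
+end;
+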
In Pascal you can define types based on existing types. For example:
+
type
+ byte = 0..255;
+
This defines byte as a sub-range of the standard type integer, so the example also demonstrates sub-ranges.
+
Pascal contains set types, which was an innovation at the time it was created. For example, to declare a set capable of holding any lowercase letters:
+
var
+ letterset : set of 'a'..'z';
+
this could then be used for testing, such as:
+
if 'a' in letterset then
+...
+
There were issues with the implementation of such features in the early days however. For example, it was not possible to define sets containing very large numbers of members since they were represented as bits in a byte, word or longword.
+
Pascal also allows the definition of complex data structures called records, such as:
+
type
+ dates = record
+ day : 1..31;
+ month : 1..12;
+ year : 0..9999
+ end;
+var
+ today : dates;
+
There were also issues with these data types in the early days. It was possible in the language to define files containing these items, but many Pascal implementations could not handle them.
+
The Wikipedia article on Pascal contains a good overview of the language if you are interested in investigating further.
+
Pascal became popular for teaching and later became more effective as the language definition changed and more implementations became available.
+
I did not use Pascal much at this time, but later made heavy use of it, as I shall describe in later episodes.
+
+
diff --git a/eps/hpr1811/hpr1811_img001.png b/eps/hpr1811/hpr1811_img001.png
new file mode 100755
index 0000000..ed41fc5
Binary files /dev/null and b/eps/hpr1811/hpr1811_img001.png differ
diff --git a/eps/hpr1811/hpr1811_img002.jpg b/eps/hpr1811/hpr1811_img002.jpg
new file mode 100755
index 0000000..a098bef
Binary files /dev/null and b/eps/hpr1811/hpr1811_img002.jpg differ
diff --git a/eps/hpr1811/hpr1811_img003.jpg b/eps/hpr1811/hpr1811_img003.jpg
new file mode 100755
index 0000000..2abf4d4
Binary files /dev/null and b/eps/hpr1811/hpr1811_img003.jpg differ
diff --git a/eps/hpr1811/hpr1811_img004.jpg b/eps/hpr1811/hpr1811_img004.jpg
new file mode 100755
index 0000000..e0a4251
Binary files /dev/null and b/eps/hpr1811/hpr1811_img004.jpg differ
diff --git a/eps/hpr1811/hpr1811_img005.jpg b/eps/hpr1811/hpr1811_img005.jpg
new file mode 100755
index 0000000..321daec
Binary files /dev/null and b/eps/hpr1811/hpr1811_img005.jpg differ
diff --git a/eps/hpr1822/hpr1822_full_shownotes.html b/eps/hpr1822/hpr1822_full_shownotes.html
new file mode 100755
index 0000000..2a2a4c3
--- /dev/null
+++ b/eps/hpr1822/hpr1822_full_shownotes.html
@@ -0,0 +1,232 @@
+
+
+
+
+
+
+
+ Some tips on using ImageMagick (HPR Show 1822)
+
+
+
+
+
+
+
+
+
Some tips on using ImageMagick (HPR Show 1822)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
I like to use images in HPR shows if I can. I have experimented with various ways of preparing them since I first started contributing, but I'm particularly impressed with what I am able to do using ImageMagick.
+
The ImageMagick system contains an enormous range of capabilities, enough for a whole series of shows. I thought I would talk about some of the features I use when preparing episodes to give you a flavour of what can be done.
+
I'm the rawest amateur when it comes to this kind of image manipulation. Just reading some of the ImageMagick documentation (see links) will show you what an enormous number of possibilities there are. I am only using a few in this episode.
+
Processing photographs
+
Stripping EXIF metadata
+
I often take pictures on my digital camera and prefer to remove the EXIF data from them before uploading. Unfortunately ImageMagick doesn't have a feature designed for doing this. It is possible to use:
+
convert -strip before.jpg after.jpg
+
This is not recommended, however, since convert re-encodes the image data as it strips the metadata (which for a JPEG means a further loss of quality). There is a better way of doing this using exiftool:
+
exiftool -all= image.jpg
+
This saves the original image by appending _original to the filename.
+
Cropping the image
+
I often want to crop the images I produce because there is extraneous stuff in the edges (due to poor photographic technique mostly). It is possible to use ImageMagick to crop but often I need to use an interactive method.
+
I used to use the GIMP program to do this, but lately I have found that Krita is a little easier.
+
Reducing the image size
+
When preparing an HPR episode I create a directory for all of the files copied off my camera. I often create an images sub-directory and drop the pictures there. I use the ImageMagick convert command with the -resize option:
+
convert bigpic.jpg -resize 640 smallpic.png
+
This reduces the picture dimensions to 640 pixels wide which fits the HPR web-page better (the aspect ratio is not changed by this operation). The resulting file is also smaller which helps with upload time and server space. The command uses the extension of the output file to determine the resulting format.
+
Typically the images directory contains the pictures from my camera which have had their EXIF data removed with exiftool. They have the extension .JPG. I run the following command to convert them all:
+
for f in images/*.JPG; do t=${f##*/}; convert "$f" -resize 640 "${t%.JPG}.png"; done
+
This traverses all images. The variable t contains the filename without the directory because I want the new files to be saved in the parent directory. In the convert command the output file is specified with variable t which has had the .JPG stripped from the end and replaced with .png.
+
It's also possible to reduce the image by a percentage rather than trying to reduce to specific dimensions. In this case, the value after -resize would be a percentage such as 50%.
+
I should really write a script to do this stage of image processing but I have not yet done so.
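+
Something like this minimal sketch would do it (it assumes the images directory layout described above, and the names are illustrative):
+
#!/bin/bash
+#
+# Sketch: convert all .JPG files in 'images' into 640-pixel-wide
+# PNG files in the current directory
+#
+for f in images/*.JPG; do
+    t="${f##*/}"                  # strip the directory part of the name
+    convert "$f" -resize 640 "${t%.JPG}.png"
+done
+
+exit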
+
Making thumbnails
+
Sometimes, if there are many pictures, I generate thumbnail images for the notes which I can set up to be clickable to get to the bigger image. I would usually make a directory thumbs to hold the thumbnails. The command to make a single thumbnail is:
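+
convert image.png -thumbnail 100x100 thumbs/image.png
+
Here image.png just stands for whichever picture is being processed; -thumbnail fits the image within 100x100 pixels, preserving the aspect ratio. To make thumbnails for a whole collection of images I use a script:
+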
#!/bin/bash
+#
+# Simple script to generate thumbnail images
+#
+
+#
+# We expect there to be a file called 'manifest' containing the names of the
+# images we want to build thumbnails for.
+#
+if [[ ! -e manifest ]]; then
+ echo "Expected a manifest file, but none exists"
+ exit
+fi
+
+#
+# We might need to create the sub-directory
+#
+if [[ ! -e thumbs ]]; then
+ mkdir thumbs
+fi
+
+#
+# Process the files in the manifest, making thumbnails
+#
+for f in $(cat manifest); do
+ mogrify -format png -path thumbs -thumbnail 100x100 $f
+done
+
+exit
+
You can use the mogrify command to do the whole thing without the script with a command such as the following:
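+
mogrify -format png -path thumbs -thumbnail 100x100 *.png
+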
This assumes you want to generate thumbnails for all images in the current directory, but this is not always what I want to do.
+
Doing stuff to thumbnails
+
In an HPR episode I've been creating recently I added a border and a watermark to my thumbnails. I did it this way:
+
#!/bin/bash
+#
+# Simple script to add a number "watermark" and a border to a collection of
+# thumbnails. Run it in the parent directory.
+#
+
+#
+# We expect there to be a file called 'manifest' containing the names of the
+# thumbnails. These are the same names as the main images but in the 'thumbs'
+# directory
+#
+if [[ ! -e manifest ]]; then
+ echo "Expected a manifest file, but none exists"
+ exit 1
+fi
+
+#
+# Check there are thumbnails
+#
+if [[ ! -e thumbs ]]; then
+ echo "No 'thumbs' directory, can't continue"
+ exit 1
+fi
+
+i=1
+
+#
+# Process the thumbnails
+#
+for f in $(cat manifest); do
+ #
+ # Save the original file
+ #
+ o="${f%%.*}_orig.png"
+ mv thumbs/$f thumbs/$o
+
+ #
+ # Convert the original adding a numeric "watermark" and creating the
+ # original name again
+ #
+ convert thumbs/$o -font Courier -pointsize 20 \
+ -draw "gravity center \
+ fill black text 0,12 '$i' \
+ fill white text 1,11 '$i'" \
+ thumbs/$f
+
+ #
+ # Add a border into the same file
+ #
+ convert thumbs/$f -shave 1x1 -bordercolor black -border 1 thumbs/$f
+
+ ((i++))
+done
+
+exit
+
Adding captions to images
+
Also, in an HPR show I'm currently putting together I decided to try adding captions to pictures. I made a file containing the names of the image files followed by the caption.
+
Here's the rough and ready script I made to do this:
+
#!/bin/bash
+
+#
+# Rudimentary script to add captions to images
+#
+
+#
+# The captions are in the file 'captions' so check it exists
+#
+if [[ ! -e captions ]]; then
+ echo "Missing 'captions' file"
+ exit 1
+fi
+
+#
+# Read lines from the captions file (use 'read' to prevent Bash treating
+# spaces as argument delimiters)
+#
+while read l; do
+ #
+ # Split the line into filename and caption on the comma
+ #
+ f=${l%%,*}
+ c=${l##*,}
+
+ #
+ # Save the original file
+ #
+ o="${f%%.*}_orig.png"
+ mv $f $o
+
+ #
+ # Add the caption making the new file have the original name
+ #
+ convert $o -background Khaki label:"$c" -gravity Center -append $f
+
+done < captions
+
+exit
+
The captions file has lines like this:
+
Flours_used.png,Flours used in the demonstration
+Kenwood_Chef.png,Kenwood Chef and accessories
+
This is not elegant or very robust but it did the job. Feel free to develop this further if you want.
+
Joining images together
+
Again, while preparing an HPR show I wanted to do some unusual image manipulation. This time I wanted to shrink two images and join them together side by side to make a final image. The shrinking was no problem, as we have already seen, but I searched for an answer to the join question and found a solution using ImageMagick's montage command, along these lines:
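+
montage -background black -tile 2x1 -geometry 1x1\<+5+5 small1.png small2.png joined.png
+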
This tiles the two images on a black background. The -geometry option defines how big the tiles are and how much border space to leave. The special 1x1\< sequence makes ImageMagick find the best fit - it keeps the images the same size as the originals.
+
+
diff --git a/eps/hpr1827/hpr1827_Damp_teatowel_is_best.png b/eps/hpr1827/hpr1827_Damp_teatowel_is_best.png
new file mode 100755
index 0000000..1cbbfc3
Binary files /dev/null and b/eps/hpr1827/hpr1827_Damp_teatowel_is_best.png differ
diff --git a/eps/hpr1827/hpr1827_Damp_teatowel_is_best_tn.png b/eps/hpr1827/hpr1827_Damp_teatowel_is_best_tn.png
new file mode 100755
index 0000000..b4b46ab
Binary files /dev/null and b/eps/hpr1827/hpr1827_Damp_teatowel_is_best_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Dividing_the_dough.png b/eps/hpr1827/hpr1827_Dividing_the_dough.png
new file mode 100755
index 0000000..39eaaea
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dividing_the_dough.png differ
diff --git a/eps/hpr1827/hpr1827_Dividing_the_dough_tn.png b/eps/hpr1827/hpr1827_Dividing_the_dough_tn.png
new file mode 100755
index 0000000..2748340
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dividing_the_dough_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Dough_being_kneaded.png b/eps/hpr1827/hpr1827_Dough_being_kneaded.png
new file mode 100755
index 0000000..2194c07
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dough_being_kneaded.png differ
diff --git a/eps/hpr1827/hpr1827_Dough_being_kneaded_tn.png b/eps/hpr1827/hpr1827_Dough_being_kneaded_tn.png
new file mode 100755
index 0000000..6727192
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dough_being_kneaded_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Dough_is_now_pliable.png b/eps/hpr1827/hpr1827_Dough_is_now_pliable.png
new file mode 100755
index 0000000..85d0ae7
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dough_is_now_pliable.png differ
diff --git a/eps/hpr1827/hpr1827_Dough_is_now_pliable_tn.png b/eps/hpr1827/hpr1827_Dough_is_now_pliable_tn.png
new file mode 100755
index 0000000..757d222
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dough_is_now_pliable_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Dough_mixed_in_mixer.png b/eps/hpr1827/hpr1827_Dough_mixed_in_mixer.png
new file mode 100755
index 0000000..38bddf0
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dough_mixed_in_mixer.png differ
diff --git a/eps/hpr1827/hpr1827_Dough_mixed_in_mixer_tn.png b/eps/hpr1827/hpr1827_Dough_mixed_in_mixer_tn.png
new file mode 100755
index 0000000..09b3a4a
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dough_mixed_in_mixer_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Dough_pressed_down.png b/eps/hpr1827/hpr1827_Dough_pressed_down.png
new file mode 100755
index 0000000..072ad26
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dough_pressed_down.png differ
diff --git a/eps/hpr1827/hpr1827_Dough_pressed_down_tn.png b/eps/hpr1827/hpr1827_Dough_pressed_down_tn.png
new file mode 100755
index 0000000..a418126
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dough_pressed_down_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Dough_ready_to_rise.png b/eps/hpr1827/hpr1827_Dough_ready_to_rise.png
new file mode 100755
index 0000000..213b68d
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dough_ready_to_rise.png differ
diff --git a/eps/hpr1827/hpr1827_Dough_ready_to_rise_tn.png b/eps/hpr1827/hpr1827_Dough_ready_to_rise_tn.png
new file mode 100755
index 0000000..412ca3e
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dough_ready_to_rise_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Dried_yeast_activating.png b/eps/hpr1827/hpr1827_Dried_yeast_activating.png
new file mode 100755
index 0000000..09ff573
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dried_yeast_activating.png differ
diff --git a/eps/hpr1827/hpr1827_Dried_yeast_activating_tn.png b/eps/hpr1827/hpr1827_Dried_yeast_activating_tn.png
new file mode 100755
index 0000000..2f4e42e
Binary files /dev/null and b/eps/hpr1827/hpr1827_Dried_yeast_activating_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Flour_and_salt.png b/eps/hpr1827/hpr1827_Flour_and_salt.png
new file mode 100755
index 0000000..108cfd5
Binary files /dev/null and b/eps/hpr1827/hpr1827_Flour_and_salt.png differ
diff --git a/eps/hpr1827/hpr1827_Flour_and_salt_tn.png b/eps/hpr1827/hpr1827_Flour_and_salt_tn.png
new file mode 100755
index 0000000..fe92e15
Binary files /dev/null and b/eps/hpr1827/hpr1827_Flour_and_salt_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Flours_used.png b/eps/hpr1827/hpr1827_Flours_used.png
new file mode 100755
index 0000000..05673c0
Binary files /dev/null and b/eps/hpr1827/hpr1827_Flours_used.png differ
diff --git a/eps/hpr1827/hpr1827_Flours_used_tn.png b/eps/hpr1827/hpr1827_Flours_used_tn.png
new file mode 100755
index 0000000..82353d1
Binary files /dev/null and b/eps/hpr1827/hpr1827_Flours_used_tn.png differ
diff --git a/eps/hpr1827/hpr1827_In_greased_loaf_tins.png b/eps/hpr1827/hpr1827_In_greased_loaf_tins.png
new file mode 100755
index 0000000..cda35d8
Binary files /dev/null and b/eps/hpr1827/hpr1827_In_greased_loaf_tins.png differ
diff --git a/eps/hpr1827/hpr1827_In_greased_loaf_tins_tn.png b/eps/hpr1827/hpr1827_In_greased_loaf_tins_tn.png
new file mode 100755
index 0000000..5e4779a
Binary files /dev/null and b/eps/hpr1827/hpr1827_In_greased_loaf_tins_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Ingredients_mixing.png b/eps/hpr1827/hpr1827_Ingredients_mixing.png
new file mode 100755
index 0000000..56c2105
Binary files /dev/null and b/eps/hpr1827/hpr1827_Ingredients_mixing.png differ
diff --git a/eps/hpr1827/hpr1827_Ingredients_mixing_tn.png b/eps/hpr1827/hpr1827_Ingredients_mixing_tn.png
new file mode 100755
index 0000000..3a593e9
Binary files /dev/null and b/eps/hpr1827/hpr1827_Ingredients_mixing_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Kenwood_Chef.png b/eps/hpr1827/hpr1827_Kenwood_Chef.png
new file mode 100755
index 0000000..a5e18cd
Binary files /dev/null and b/eps/hpr1827/hpr1827_Kenwood_Chef.png differ
diff --git a/eps/hpr1827/hpr1827_Kenwood_Chef_tn.png b/eps/hpr1827/hpr1827_Kenwood_Chef_tn.png
new file mode 100755
index 0000000..bb38c69
Binary files /dev/null and b/eps/hpr1827/hpr1827_Kenwood_Chef_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Knocking_back.png b/eps/hpr1827/hpr1827_Knocking_back.png
new file mode 100755
index 0000000..64bfcd4
Binary files /dev/null and b/eps/hpr1827/hpr1827_Knocking_back.png differ
diff --git a/eps/hpr1827/hpr1827_Knocking_back_tn.png b/eps/hpr1827/hpr1827_Knocking_back_tn.png
new file mode 100755
index 0000000..a1393fa
Binary files /dev/null and b/eps/hpr1827/hpr1827_Knocking_back_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Loaves_baked.png b/eps/hpr1827/hpr1827_Loaves_baked.png
new file mode 100755
index 0000000..6660b01
Binary files /dev/null and b/eps/hpr1827/hpr1827_Loaves_baked.png differ
diff --git a/eps/hpr1827/hpr1827_Loaves_baked_tn.png b/eps/hpr1827/hpr1827_Loaves_baked_tn.png
new file mode 100755
index 0000000..c036b20
Binary files /dev/null and b/eps/hpr1827/hpr1827_Loaves_baked_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Loaves_cooling.png b/eps/hpr1827/hpr1827_Loaves_cooling.png
new file mode 100755
index 0000000..4492404
Binary files /dev/null and b/eps/hpr1827/hpr1827_Loaves_cooling.png differ
diff --git a/eps/hpr1827/hpr1827_Loaves_cooling_tn.png b/eps/hpr1827/hpr1827_Loaves_cooling_tn.png
new file mode 100755
index 0000000..0555cfd
Binary files /dev/null and b/eps/hpr1827/hpr1827_Loaves_cooling_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Ready_for_tins.png b/eps/hpr1827/hpr1827_Ready_for_tins.png
new file mode 100755
index 0000000..afc5e1a
Binary files /dev/null and b/eps/hpr1827/hpr1827_Ready_for_tins.png differ
diff --git a/eps/hpr1827/hpr1827_Ready_for_tins_tn.png b/eps/hpr1827/hpr1827_Ready_for_tins_tn.png
new file mode 100755
index 0000000..ce92e6e
Binary files /dev/null and b/eps/hpr1827/hpr1827_Ready_for_tins_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Risen_dough.png b/eps/hpr1827/hpr1827_Risen_dough.png
new file mode 100755
index 0000000..78962e1
Binary files /dev/null and b/eps/hpr1827/hpr1827_Risen_dough.png differ
diff --git a/eps/hpr1827/hpr1827_Risen_dough_in_tins.png b/eps/hpr1827/hpr1827_Risen_dough_in_tins.png
new file mode 100755
index 0000000..7bf46e0
Binary files /dev/null and b/eps/hpr1827/hpr1827_Risen_dough_in_tins.png differ
diff --git a/eps/hpr1827/hpr1827_Risen_dough_in_tins_tn.png b/eps/hpr1827/hpr1827_Risen_dough_in_tins_tn.png
new file mode 100755
index 0000000..81a5e8b
Binary files /dev/null and b/eps/hpr1827/hpr1827_Risen_dough_in_tins_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Risen_dough_tn.png b/eps/hpr1827/hpr1827_Risen_dough_tn.png
new file mode 100755
index 0000000..c78bcdb
Binary files /dev/null and b/eps/hpr1827/hpr1827_Risen_dough_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Rising_in_tins.png b/eps/hpr1827/hpr1827_Rising_in_tins.png
new file mode 100755
index 0000000..acb031d
Binary files /dev/null and b/eps/hpr1827/hpr1827_Rising_in_tins.png differ
diff --git a/eps/hpr1827/hpr1827_Rising_in_tins_tn.png b/eps/hpr1827/hpr1827_Rising_in_tins_tn.png
new file mode 100755
index 0000000..d4195f6
Binary files /dev/null and b/eps/hpr1827/hpr1827_Rising_in_tins_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Sliced_loaf.png b/eps/hpr1827/hpr1827_Sliced_loaf.png
new file mode 100755
index 0000000..383ed70
Binary files /dev/null and b/eps/hpr1827/hpr1827_Sliced_loaf.png differ
diff --git a/eps/hpr1827/hpr1827_Sliced_loaf_tn.png b/eps/hpr1827/hpr1827_Sliced_loaf_tn.png
new file mode 100755
index 0000000..2ee23d9
Binary files /dev/null and b/eps/hpr1827/hpr1827_Sliced_loaf_tn.png differ
diff --git a/eps/hpr1827/hpr1827_Wholemeal_Bread_Recipe.pdf b/eps/hpr1827/hpr1827_Wholemeal_Bread_Recipe.pdf
new file mode 100755
index 0000000..f659281
Binary files /dev/null and b/eps/hpr1827/hpr1827_Wholemeal_Bread_Recipe.pdf differ
diff --git a/eps/hpr1827/hpr1827_full_shownotes.html b/eps/hpr1827/hpr1827_full_shownotes.html
new file mode 100755
index 0000000..45abf86
--- /dev/null
+++ b/eps/hpr1827/hpr1827_full_shownotes.html
@@ -0,0 +1,100 @@
+
+
+
+
+
+
+
+ How I make bread (HPR Show 1827)
+
+
+
+
+
+
+
+
+
How I make bread (HPR Show 1827)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
Ken Fallon was asking for bread-making advice on a recent Community News recording. I've been making my own bread since the 1970's and I thought I'd share my methods in response. Frank Bell also did an excellent bread-making episode in 2013.
+
I remember my mother having a go at making bread when I was a kid. The result smelled lovely but the bread hadn't risen much; it had an unfortunate resemblance to a brick in shape, if I remember right, but it was delicious nevertheless. After that I always wanted to try making my own.
+
I bought a Kenwood Chef mixer when I lived in Lancaster in the 1970's and it came with some bread recipes, which I tried. I found I had a fair degree of success in the first instance, though there were a number of failures. I kept experimenting and got better at it, and graduated to making loaves of various types, rolls, bagels, pitta bread, pizza bases, and so on.
+
I have been making my own bread ever since. I continued with the method of using the Kenwood Chef and I'm currently on my third one. I have found that eventually the gearbox breaks: making bread, especially large loaves, works the device very hard.
+
As life got busier I bought a bread-maker to simplify the process, and am currently on my third one of these, a Panasonic SD255 (now discontinued). I mainly use this device because it is so simple to prepare the ingredients and then leave the bread to mix, rise and bake.
+
Recipe
+
For this episode I baked two 1lb loaves according to an old recipe I have been using for many years. It is based on one which came with the Kenwood Chef and uses wholemeal flour. I have included a PDF copy of this recipe, see the links below.
+
Flour
+
Flours used for this episode (click the thumbnail for the full image)
+
For these loaves I used a mixture of strong plain wholemeal flour and strong plain white; probably about 80% wholemeal, 20% white.
+
Names of flours vary between countries; strong plain, the term used in the UK, means a high-gluten flour, often from a winter wheat variety, without raising agent. High-gluten wheats are sometimes referred to as hard. The gluten makes a more elastic dough which rises better and makes a more chewy bread.
+
Food Mixer
+
Kenwood Chef
+
The picture shows my mixer, which I have had for around 20 years. This model is not made any more. I have a number of attachments for it including the ones pictured, as well as a coffee grinder and a wheat mill.
+
Preparation
+
Flour and salt. Activating the dried yeast
+
I normally use this dried yeast when making bread this way. It needs to be mixed into warm water with some sugar to feed it. This combination makes it begin to froth quite quickly, especially on a warm day.
+
I have used fresh yeast when I can get it, but it's not sold anywhere nearby any more. Fresh yeast also needs to be activated before mixing.
+
Mixing and kneading
+
Making dough
+
I find the ingredients (water+sugar, yeast, flour, salt and oil) mix together without problems in the food mixer and soon start to form a dough. I let the mixer work the dough for the recommended 3 minutes and get an end product that you can see in picture 6. I like to finish off the kneading by hand to make sure the dough is as elastic as it can be.
+
Rising
+
Leaving the dough to rise
+
You can cover the bowl with film but unless the film has been greased the dough can stick to it. I forgot that at the start, and transferred to a damp teatowel once I had remembered.
+
Knocking back
+
Second kneading
+
I use the food mixer at this stage, though kneading by hand would work just as well.
+
Second rise
+
Dividing, placing in tins and leaving to rise again
+
I divide the dough with the tool in picture 13 which is a dough scraper for use with the wet doughs produced by some recipes. I'm using some rather old bread tins described as 1lb loaf tins. Make sure to grease them well before placing the dough inside or it will be hard work to extract the loaves.
+
I pressed the dough into the tins this time, but it shouldn't really be necessary and the end result will be better if you don't. Leave covered to rise as before.
+
Baking the bread
+
Once risen, bake the bread, leave to cool and voila! fresh bread
+
I like to slice up my bread after it's cooled, then I freeze it. That way I can take out individual slices and toast them from frozen or allow them to thaw and use them.
+
Conclusion
+
My favourite recipe at the moment uses a Wholemeal, Rye and Spelt flour mixture with sunflower seeds. This makes quite a heavy bread which is absolutely wonderful toasted. I do cheat a little though, and make this in my bread-maker!
+
Flours like Rye are low in gluten, so breads made with them do not rise as well. I have also been experimenting with Buckwheat flour, which I don't think has very much gluten either, and I mix this (and rye flour) with other flours to get a reasonable dough.
+
My son, who used to help with the bread-making as a little boy, makes a very good sourdough loaf and often asks me to look after his sourdough starter when he's away on holiday. I haven't had great success with sourdough and need to get some lessons from him.
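+
+
Some Bash tips (HPR Show 1843)
+
Dave Morriss
+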
If you are a command-line user (and personally, I think you should be prepared to use the command line since it's so powerful) it's likely that you have used the cd command to change directory.
+
There are other directory movement commands within Bash: pushd, popd and dirs. I'm going to describe these today.
+
Basic Usage
+
pushd dir
+
This command changes directory like cd but it does more: it saves the previous directory, and the directory you've moved to, in a stack.
+
So, assume you have logged in and are in your home directory:
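+
dave@i7:~$ pwd
+/home/dave
+dave@i7:~$ pushd Documents
+~/Documents ~
+dave@i7:~/Documents$
+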
pwd shows the directory I'm in. The pushd Documents takes me to my Documents directory. The stack is shown as a list, with the top of the stack on the left, and with ~ denoting the top-level home directory.
+
popd
+
This command moves back to the previous directory. To be more precise, it takes the top-most (current) directory off the stack and changes directory to the new top.
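+
For example, continuing from the state above (and assuming a ~/Music directory exists):
+
dave@i7:~/Documents$ pushd ~/Music
+~/Music ~/Documents ~
+dave@i7:~/Music$ popd
+~/Documents ~
+dave@i7:~/Documents$ popd
+~
+dave@i7:~$
+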
Note that the order of the stack is not the same as it would have been if I had visited the directories in the same order.
+
You can manipulate the stack with pushd, but the order is fixed: you can only rotate it (as if it were a loop) so that a particular directory is on top. This is done by using the option +n or -n (where n is an integer, not the letter 'n'). So pushd +2 means rotate the stack so that entry number 2, counting from the left (or as numbered by dirs -v), is raised to the top. Everything else rotates appropriately:
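+
For example, using some nested test directories:
+
dave@i7:~$ pushd Test1
+~/Test1 ~
+dave@i7:~/Test1$ pushd Test2
+~/Test1/Test2 ~/Test1 ~
+dave@i7:~/Test1/Test2$ pushd Test3
+~/Test1/Test2/Test3 ~/Test1/Test2 ~/Test1 ~
+dave@i7:~/Test1/Test2/Test3$ pushd +2
+~/Test1 ~ ~/Test1/Test2/Test3 ~/Test1/Test2
+dave@i7:~/Test1$ popd -n
+~/Test1 ~/Test1/Test2/Test3 ~/Test1/Test2
+dave@i7:~/Test1$ popd
+~/Test1/Test2/Test3 ~/Test1/Test2
+dave@i7:~/Test1/Test2/Test3$
+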
Note that popd -n removed element 1 from the stack whereas plain popd removed the zeroth element.
+
As with pushd, options of the form +n and -n are catered for (where n is an integer, not the letter 'n'). In this case popd +3 means remove directory 3 from the stack, counting from the left, whereas popd -2 counts from the right.
+
This is not stack rotation as with pushd but deletion of elements. If the topmost element is deleted then the current directory changes:
+
# Traverse several directories
+dave@i7:~$ pushd Test1
+~/Test1 ~
+dave@i7:~/Test1$ pushd Test2
+~/Test1/Test2 ~/Test1 ~
+dave@i7:~/Test1/Test2$ pushd Test3
+~/Test1/Test2/Test3 ~/Test1/Test2 ~/Test1 ~
+dave@i7:~/Test1/Test2/Test3$ pushd Test4
+~/Test1/Test2/Test3/Test4 ~/Test1/Test2/Test3 ~/Test1/Test2 ~/Test1 ~
+dave@i7:~/.../Test2/Test3/Test4$ dirs -v
+ 0 ~/Test1/Test2/Test3/Test4
+ 1 ~/Test1/Test2/Test3
+ 2 ~/Test1/Test2
+ 3 ~/Test1
+ 4 ~
+# Now use popd +n to remove numbered directories
+dave@i7:~/.../Test2/Test3/Test4$ popd +3
+~/Test1/Test2/Test3/Test4 ~/Test1/Test2/Test3 ~/Test1/Test2 ~
+dave@i7:~/.../Test2/Test3/Test4$ popd +0
+~/Test1/Test2/Test3 ~/Test1/Test2 ~
+dave@i7:~/Test1/Test2/Test3$ dirs -v
+ 0 ~/Test1/Test2/Test3
+ 1 ~/Test1/Test2
+ 2 ~
+# Then use popd -n to count from the bottom of the stack
+dave@i7:~/Test1/Test2/Test3$ popd -1
+~/Test1/Test2/Test3 ~
+dave@i7:~/Test1/Test2/Test3$ dirs -v
+ 0 ~/Test1/Test2/Test3
+ 1 ~
+
dirs
+
A slightly modified version of the dirs manpage is available below.
+
I have been using dirs -v to show the directory stack in what I think is a more readable form, but there are other options.
+
The -p option prints stack entries one per line without numbering. The -l option gives the full pathname without the tilde ('~') at the start denoting the home directory.
+
You can clear the entire directory stack using the -c option.
+
If you are thoroughly confused by the +n and -n options then using them with dirs makes it plainer what directories they refer to:
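+
For example, with a stack of three directories:
+
dave@i7:~/Test1/Test2/Test3$ dirs -v
+ 0 ~/Test1/Test2/Test3
+ 1 ~/Test1/Test2
+ 2 ~
+dave@i7:~/Test1/Test2/Test3$ dirs +2
+~
+dave@i7:~/Test1/Test2/Test3$ dirs -2
+~/Test1/Test2/Test3
+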
I used to use these commands a lot pre-Linux, when I was working on older Unix systems like Ultrix, SunOS and Solaris. This was in the days of real terminals, before tabbed terminal emulators and virtual desktops.
+
I used to find it very helpful to be able to stop what I was doing in one directory and pushd to another to check something or answer a question, then popd back to where I came from.
+
Nowadays I tend to have several terminal emulators on several virtual desktops, each with multiple tabs, so I use them to separate out my directories. However, if you tend to work in a single terminal session, as I used to, you might find these commands useful.
+
+
Manual Pages
+
pushd [-n] [+n] [-n]
+
pushd [-n] [dir]
+
Adds a directory to the top of the directory stack, or rotates the stack, making the new top of the stack the current working directory. With no arguments, exchanges the top two directories and returns 0, unless the directory stack is empty. Arguments, if supplied, have the following meanings:
+
+
-n (a hyphen and the letter 'n')
+
Suppresses the normal change of directory when adding directories to the stack, so that only the stack is manipulated.
+
+
+n (a plus and an integer)
+
Rotates the stack so that the nth directory (counting from the left of the list shown by the dirs command, starting with zero) is at the top.
+
+
-n (a hyphen and an integer)
+
Rotates the stack so that the nth directory (counting from the right of the list shown by the dirs command, starting with zero) is at the top.
+
+
dir (the name of a directory)
+
Adds the directory dir to the directory stack at the top, making it the new current working directory as if it had been supplied as the argument to the cd builtin.
+
+
+
If the pushd command is successful, a dirs command is performed as well. If the first form is used, pushd returns 0 unless the cd to dir fails. With the second form, pushd returns 0 unless the directory stack is empty, a non-existent directory stack element is specified, or the directory change to the specified new current directory fails.
+
+
popd [-n] [+n] [-n]
+
Removes entries from the directory stack. With no arguments, removes the top directory from the stack, and performs a cd to the new top directory. Arguments, if supplied, have the following meanings:
+
+
-n (a hyphen and the letter 'n')
+
Suppresses the normal change of directory when removing directories from the stack, so that only the stack is manipulated.
+
+
+n (a plus and an integer)
+
Removes the nth entry counting from the left of the list shown by dirs, starting with zero. For example: popd +0 removes the first directory, popd +1 the second.
+
+
-n (a hyphen and an integer)
+
Removes the nth entry counting from the right of the list shown by dirs, starting with zero. For example: popd -0 removes the last directory, popd -1 the next to last.
+
+
+
If the popd command is successful, a dirs command is performed as well, and the return status is 0. popd returns false if an invalid option is encountered, the directory stack is empty, a non-existent directory stack entry is specified, or the directory change fails.
+
+
dirs [-clpv] [+n] [-n]
+
Without options, displays the list of currently remembered directories. The default display is on a single line with directory names separated by spaces. Directories are added to the list with the pushd command; the popd command removes entries from the list.
+
+
-c
+
Clears the directory stack by deleting all of the entries.
+
+
-l
+
Produces a listing using full pathnames; the default listing format uses a tilde to denote the home directory.
+
+
-p
+
Print the directory stack with one entry per line.
+
+
-v
+
Print the directory stack with one entry per line, prefixing each entry with its index in the stack.
+
+
+n (a plus and an integer)
+
Displays the nth entry counting from the left of the list shown by dirs when invoked without options, starting with zero.
+
+
-n (a hyphen and an integer)
+
Displays the nth entry counting from the right of the list shown by dirs when invoked without options, starting with zero.
+
+
+
The return value is 0 unless an invalid option is supplied or n indexes beyond the end of the directory stack.
+
+
+
+
+
+
+
diff --git a/eps/hpr1864/hpr1864_full_shownotes.html b/eps/hpr1864/hpr1864_full_shownotes.html
new file mode 100755
index 0000000..ff8ebde
--- /dev/null
+++ b/eps/hpr1864/hpr1864_full_shownotes.html
@@ -0,0 +1,141 @@
+
+
+
+
+
+
+
+ Turning an old printer into a network printer (HPR Show 1864)
+
+
+
+
+
+
+
+
+
Turning an old printer into a network printer (HPR Show 1864)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
I have a USB printer I bought back in 2005 when I bought a Windows PC for the family. It's an HP PSC 2410 PhotoSmart All-in-One printer. This device is a colour inkjet printer, with a scanner, FAX and card-reading facilities. It had been left unused in a corner for many years, and I recently decided to see if I could make use of it again, so I cleaned it up and bought some new ink cartridges for it.
+
This printer is well catered for under Linux and it is possible to access it using CUPS for the printing and SANE for scanning. I connected it to my Linux desktop for a while to prove that it was usable. However, rather than leaving it connected in this way, I wanted to turn it into a network printer that could be used by the rest of the family. My kids are mostly away at university these days but invariably need to print stuff when they pass through. I searched the Internet and found an article in the Raspberry Pi Geek magazine which helped with this project.
+
Using a Raspberry Pi
+
I decided to use my oldest Raspberry Pi to run this printer. I have a two-USB-port Model B that I bought when these devices first came out in 2012, and I haven't used it much. It's in one of the early Pimoroni Pibow acrylic rainbow cases.
+
I connected the Pi directly to my router with an ethernet cable. The router is on a shelf I put up in the corner of my living room and the Pi fits on there quite comfortably.
+
The printer is on a small table in the corner of the room under the router shelf. It's connected to the Pi with a USB cable. It needs a reasonable amount of space, mainly for the paper tray and the output tray at the front. It also needs room to be able to open the lid of the scanner for scanning and copying purposes.
+
The Pi is running Raspbian off a class 10 32GB SD card. This is overkill for this function, but I already had the card, and I made sure it held the latest Raspbian release.
+
Because the power supply for the printer seems to consume power even when the printer is off, I installed one of the radio-controlled switches I use in the house and turn it on and off with a remote control. Turning it on, and indeed all of the activities of this printer, are a matter of extreme fascination for my cat.
+
Picture: cat > printer
+
CUPS
+
I configured the Pi while it was connected to a monitor, keyboard and mouse, enabled SSH on it and put it in its final location, running headless. I normally assign all of my local machines a fixed DHCP address on the router and add this address to /etc/hosts on all of my client machines. This one ended up being called rpi1 with the address 192.168.0.66.
+
Once the Pi was running headless I installed CUPS. On Raspbian this brings in many other packages such as HP Linux Imaging and Printing (HPLIP), Scanner Access Now Easy (SANE) and many filters and printer drivers.
+
Once CUPS is installed it is fairly simple to configure it either through the hp-setup tool or the web interface. As you might have guessed, hp-setup is primarily used for setting up HP printers which interface to the HPLIP software. Since I was setting up an HP printer I used the command hp-setup -i for this and used it to configure a print queue with the hpcups driver.
+
CUPS Web Interface
+
If you are doing this you might prefer to use the web interface to CUPS, especially if you are using a non-HP printer. The interface is available on port 631, so in my case I simply point a browser to my Raspberry Pi with the following URL http://192.168.0.66:631/
+
In order to be able to perform administrative functions such as managing printers and print jobs, it is necessary to authenticate to CUPS and to use credentials which have the ability to perform printer administration.
+
I chose to give my account "dave" on the Raspberry Pi lpadmin rights:
+
sudo usermod -a -G lpadmin dave
+
The Raspberry Pi Geek magazine article recommends the following steps to make the printer visible from any address:
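+
One way of doing this is with the cupsctl command (a sketch, not necessarily the article's exact steps):
+
# Allow access to the web interface and printers from other hosts,
+# enable remote administration, and share the local printers
+sudo cupsctl --remote-admin --remote-any --share-printers
+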
Using the web interface I was able to create CUPS printer queues on the Raspberry Pi.
+
Accessing the printer
+
The Raspberry Pi Geek magazine article gives advice on setting up print clients on remote systems. I did this on my Debian Testing KDE system by invoking Systems Settings and selecting the Printers entry. My son did the equivalent on his MacBook in order to print.
+
I have not yet managed to get my Android phone or Nexus 7 tablet to print this way, though they can both print to my networked HP LaserJet printer via the free HP Print Service plugin. I assume this does not use a standard LPR interface. I am not prepared to pay money to make a Unix device print and I have only found non-free Android apps so far.
+
At the time of writing I have not managed to set up my daughter's Windows 8 PC to use this printer. I realised that I could set up SAMBA on the Raspberry Pi, but was reluctant to do so.
+
My daughter is away at university now, but I recently found some helpful advice on this subject on the Arch Wiki. In particular, the use of IPP (Internet Printing Protocol) seems to be the best route. Interestingly the advice is not to use SAMBA, which appeals to me!
+
It looks as if we can simply add a printer with the address:
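+
http://192.168.0.66:631/printers/queue_name
+
(where queue_name is the name given to the print queue when it was created in CUPS).
+
SANE
+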
Using the scanner on the remote printer should be straightforward, but I encountered some problems with this.
+
According to the Raspberry Pi Geek magazine article it is necessary to make the SANE daemon run all the time by editing the file /etc/default/saned. I ensured mine contained the following:
+
RUN=yes
+RUN_AS_USER=saned
+
I also edited /etc/sane.d/saned.conf and added the line:
+
192.168.0.0/24
+
since all of my local network is in the range 192.168.0.0-192.168.0.255, and this allows access from everything in this range. I then started the daemon with:
+
sudo service saned restart
+
I then tried the following command on the Pi:
+
sudo scanimage -L
+
and saw the following response:
+
device `hpaio:/usb/psc_2400_series?serial=MY47KM22Y36T' is a Hewlett-Packard psc_2400_series all-in-one
+
Moving to my desktop system, which had CUPS installed, the same command did not find the scanner.
It took me a little while to work out what was happening here. To cut a long story short, I found the device and looked at the permissions on it:
+
root@rpi1:~# lsusb
+Bus 001 Device 002: ID 0424:9512 Standard Microsystems Corp.
+Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
+Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
+Bus 001 Device 009: ID 03f0:3611 Hewlett-Packard PSC 2410 PhotoSmart
+
+root@rpi1:~# ls -l /dev/bus/usb/001/009
+crw-rw-r-T 1 root lp 189, 8 Mar 1 17:17 /dev/bus/usb/001/009
+
The USB device is owned by root and the group lp. However, the user saned has the following groups:
+
root@rpi1:~# id saned
+uid=110(saned) gid=114(saned) groups=114(saned),110(scanner)
+
So user saned cannot access the device.
+
It seemed that the simplest solution was to add saned to the lp group:
+
root@rpi1:~# usermod -a -G lp saned
+root@rpi1:~# id saned
+uid=110(saned) gid=114(saned) groups=114(saned),7(lp),110(scanner)
+
Now scanimage -L from the remote client returned:
+
root@i7-desktop:~# scanimage -L
+device `net:192.168.0.66:hpaio:/usb/psc_2400_series?serial=MY47KM22Y36T' is a Hewlett-Packard psc_2400_series all-in-one
+
Now it is possible for a remote system to access the scanner through GIMP, Xsane and other SANE interfaces.
+
It is not clear why the standard CUPS installation creates user saned in the scanner group when the device is owned by the lp group. I have not determined whether this is a problem with the saned user or with the UDEV code that creates the device.
+
Accessing the scanner from Mac OSX and Windows
+
I have not yet managed to test these options. When a scan is required I run it on a Linux system and email it or share it via DropBox.
+
+
diff --git a/eps/hpr1864/hpr1864_printer_monitor.png b/eps/hpr1864/hpr1864_printer_monitor.png
new file mode 100755
index 0000000..73fc57f
Binary files /dev/null and b/eps/hpr1864/hpr1864_printer_monitor.png differ
diff --git a/eps/hpr1884/hpr1884_full_shownotes.html b/eps/hpr1884/hpr1884_full_shownotes.html
new file mode 100755
index 0000000..7abd3b2
--- /dev/null
+++ b/eps/hpr1884/hpr1884_full_shownotes.html
@@ -0,0 +1,161 @@
+
+
+
+
+
+
+
+ Some more Bash tips (HPR Show 1884)
+
+
+
+
+
+
+
+
+
Some more Bash tips (HPR Show 1884)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
We looked at Parameter Expansion back in HPR episode 1648 where we saw how Bash variables could be used, checked and edited. There are other sorts of expansions within Bash, and we'll look at one called "Brace Expansion" in this episode, which follows on from episode 1843 "Some Bash tips".
+
I have included some extracts from the Bash manual page for reference.
+
Brace expansion
+
In brace expansion an expression enclosed in braces is expanded. The expression can be a list of comma separated strings or a sequence expression.
+
Comma separated strings
+
This can just be a series of letters such as:
+
echo c{a,e,i,o,u}t
+-> cat cet cit cot cut
+
Note: The line beginning '->' is what will be generated by the above statement. I will be using this method of signifying output through these notes.
+
Here you see that each of the letters is inserted in between the letters c and t to make the list shown. This does not work inside quotes:
+
echo "c{a,e,i,o,u}t"
+-> c{a,e,i,o,u}t
+
The comma separated strings can be longer than a single character:
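+
echo {small,medium,large}_cup
+-> small_cup medium_cup large_cup
+
Sequence expressions
+
The other form of brace expansion is the sequence expression {x..y} (or {x..y..incr}), where x and y are integers and incr is an optional increment:
+
echo {1..10}
+-> 1 2 3 4 5 6 7 8 9 10
+echo {1..10..3}
+-> 1 4 7 10
+echo {05..10}
+-> 05 06 07 08 09 10
+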
The x and y parts can also be single characters with an optional (numeric) increment such as:
+
echo {a..j}
+-> a b c d e f g h i j
+echo {a..j..2}
+-> a c e g i
+echo {z..t..-2}
+-> z x v t
+
Uses for brace expansion
+
The two forms are often used to generate lists of filenames for use in commands which take multiple arguments. For example, if a directory contains image files interspersed with other files, and you wish to examine just the images with the ls command, you might type:
+
ls -l ~/Pictures/*.{jpg,png}
+
You might want to grep a set of log files for a particular string, restricting yourself to a range of days:
+
grep "^Reset" -A7 logs/*201509{01..13}*.log
+
It's a useful way to generate sequences in a loop:
+
for i in {0..10}; do echo $i; done
+
I was recently experimenting with the printf command and its '%b' argument, and used statements like the following to show what it did:
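+
printf '%b' \\x{20..2f}; echo
+->  !"#$%&'()*+,-./
+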
To explain, '%b' expects its argument to be a string containing escape sequences such as '\x20', meaning hexadecimal 20, which it turns into the corresponding character. You can see that hex 20 is a space, hex 21 an exclamation mark and so forth.
+
The seq command
+
There is a command seq which can do a lot of what the Bash brace expansion sequence expressions can do, and a few things more. A copy of the manual page is included below.
+
The seq command treats its values as floating point values, so it is possible to do things like:
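+
seq -s' ' 1 0.5 3
+-> 1.0 1.5 2.0 2.5 3.0
+
Manual Pages
+
Brace Expansion
+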
Brace expansion is a mechanism by which arbitrary strings may be generated. This mechanism is similar to pathname expansion, but the filenames generated need not exist. Patterns to be brace expanded take the form of an optional preamble, followed by either a series of comma-separated strings or a sequence expression between a pair of braces, followed by an optional postscript. The preamble is prefixed to each string contained within the braces, and the postscript is then appended to each resulting string, expanding left to right.
+
Brace expansions may be nested. The results of each expanded string are not sorted; left to right order is preserved. For example, 'a{d,c,b}e' expands into 'ade ace abe'.
+
A sequence expression takes the form {x..y[..incr]}, where x and y are either integers or single characters, and incr, an optional increment, is an integer. When integers are supplied, the expression expands to each number between x and y, inclusive. Supplied integers may be prefixed with 0 to force each term to have the same width. When either x or y begins with a zero, the shell attempts to force all generated terms to contain the same number of digits, zero-padding where necessary. When characters are supplied, the expression expands to each character lexicographically between x and y, inclusive, using the default C locale. Note that both x and y must be of the same type. When the increment is supplied, it is used as the difference between each term. The default increment is 1 or -1 as appropriate.
+
Brace expansion is performed before any other expansions, and any characters special to other expansions are preserved in the result. It is strictly textual. Bash does not apply any syntactic interpretation to the context of the expansion or the text between the braces.
+
A correctly-formed brace expansion must contain unquoted opening and closing braces, and at least one unquoted comma or a valid sequence expression. Any incorrectly formed brace expansion is left unchanged. A { or , may be quoted with a backslash to prevent its being considered part of a brace expression. To avoid conflicts with parameter expansion, the string ${ is not considered eligible for brace expansion.
+
This construct is typically used as shorthand when the common prefix of the strings to be generated is longer than in the above example:
+
mkdir /usr/local/src/bash/{old,new,dist,bugs}
+
or chown root /usr/{ucb/{ex,edit},lib/{ex?.?*,how_ex}}
+
Brace expansion introduces a slight incompatibility with historical versions of sh. sh does not treat opening or closing braces specially when they appear as part of a word, and preserves them in the output. Bash removes braces from words as a consequence of brace expansion. For example, a word entered to sh as file{1,2} appears identically in the output. The same word is output as file1 file2 after expansion by bash. If strict compatibility with sh is desired, start bash with the +B option or disable brace expansion with the +B option to the set command (see SHELL BUILTIN COMMANDS below).
+
+
NAME
+
seq - print a sequence of numbers
+
SYNOPSIS
+
seq [OPTION]... LAST
+seq [OPTION]... FIRST LAST
+seq [OPTION]... FIRST INCREMENT LAST
+
DESCRIPTION
+
Print numbers from FIRST to LAST, in steps of INCREMENT.
+
+Mandatory arguments to long options are mandatory for short options too.
+
+-f, --format=FORMAT
+ use printf style floating-point FORMAT
+
+-s, --separator=STRING
+ use STRING to separate numbers (default: \n)
+
+-w, --equal-width
+ equalize width by padding with leading zeroes
+
+--help display this help and exit
+
+--version
+ output version information and exit
+
+If FIRST or INCREMENT is omitted, it defaults to 1. That is, an omitted
+INCREMENT defaults to 1 even when LAST is smaller than FIRST. The
+sequence of numbers ends when the sum of the current number and INCREMENT
+would become greater than LAST. FIRST, INCREMENT, and LAST are
+interpreted as floating point values. INCREMENT is usually positive if
+FIRST is smaller than LAST, and INCREMENT is usually negative if FIRST
+is greater than LAST. FORMAT must be suitable for printing one argument
+of type 'double'; it defaults to %.PRECf if FIRST, INCREMENT, and LAST are
+all fixed point decimal numbers with maximum precision PREC, and to %g
+otherwise.
+
AUTHOR
+
Written by Ulrich Drepper.
+
+
+
+
+
+
diff --git a/eps/hpr1903/hpr1903_full_shownotes.html b/eps/hpr1903/hpr1903_full_shownotes.html
new file mode 100755
index 0000000..e7dfdb0
--- /dev/null
+++ b/eps/hpr1903/hpr1903_full_shownotes.html
@@ -0,0 +1,240 @@
+
+
+
+
+
+
+
+ Some further Bash tips (HPR Show 1903)
+
+
+
+
+
+
+
+
+
Some further Bash tips (HPR Show 1903)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Expansion
+
There are seven types of expansion applied to the command line in the following order:
+
+
Brace expansion (we looked at this subject in the last episode 1884)
+
Tilde expansion
+
Parameter and variable expansion (this was covered in episode 1648)
+
Command substitution
+
Arithmetic expansion
+
Word splitting
+
Pathname expansion
+
+
We will look at some more of these in this episode but since there is a lot to cover, we'll continue in a later episode.
+
Tilde expansion
+
This is a convenient way of referring to a home directory in a file path, though there are other less well-known uses which we will also examine.
+
Tilde on its own
+
Consider the following example. Imagine you are in the directory Documents and you want to look at your .bashrc file. Here are some ways of doing this:
+
cd Documents
+less ../.bashrc
+less $HOME/.bashrc
+less ~/.bashrc
+
+
The first method uses .. to refer to the directory above the current one.
+
The second uses the variable HOME, which is usually created for you when you login, and points to your home directory.
+
The third method uses a plain tilde (~) which means the home directory of the current user.
+
+
Actually the tilde in this example uses the contents of the HOME variable, just like the example above it. If you happened to change this variable for some reason the changed version would be used. If there is no HOME variable then the defined home directory of the current user will be looked up.
+
Note: The line beginning '->' is what will be generated by the following statements. I will be using this method of signifying output throughout these notes (unless it's confusing).
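+
echo ~
+-> /home/hprdemo
+HOME=/tmp
+echo ~
+-> /tmp
+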
Warning: changing HOME can lead to great confusion, so it's not recommended. For example, after such a change the cd command without any argument moves to wherever HOME points to rather than to the expected home directory. This is a demonstration, not a recommendation!
+
Tilde and a login name
+
If the tilde is followed by a login name (username) then it refers to the home directory of that login name:
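+
echo ~hprdemo
+-> /home/hprdemo
+echo ~root
+-> /root
+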
This is useful, for example, in multi-user environments where you want to copy files to or from someone else's directory - assuming the permissions have been set to allow this, of course.
+
By the way, if you have changed the HOME variable it can be reset either by logging out and back in again or with the ~login_name form as in the following:
+
HOME=~hprdemo
+echo ~
+-> /home/hprdemo
+
Like many instances in Bash, the login name after the tilde can be completed by pressing the Tab key. If you happen to work in an environment with many login names, then take care when doing this since it might require a search of the entire name space. I used to work at a University with up to 50,000 login names, and pressing Tab inappropriately could result in a big search and a very long delay!
+
Tilde with a plus sign
+
There are other forms of tilde expansion. First, ~+ uses the value of the PWD variable. This variable is used by Bash to track the directory you are currently in.
+
cd Documents
+echo ~+
+-> /home/hprdemo/Documents
+
Tilde and a minus sign
+
There is another variable, OLDPWD that is used to hold the previous contents of PWD. This can be accessed with ~-.
+
cd Documents
+echo ~-
+-> /home/hprdemo
+echo ~+
+-> /home/hprdemo/Documents
+
Tilde and the directory stack
+
There is one more way in which tilde expansion can be used in Bash. This links to the directory stack that we looked at in show 1843. In that show we saw the pushd and popd commands for manipulating the stack by adding and removing directories. We also saw the dirs command for showing the contents of the stack.
+
Using ~ followed by a + or a - and a number references a directory on the stack. Using dirs -v we can see the stack with numbered entries (we're not using the -> here as it might be confusing):
+
dirs -v
+0 ~/Documents
+1 ~
+
In such a case the tilde sequence ~1 (or ~+1) references the stack element numbered 1 above:
+
echo ~1
+-> /home/hprdemo
+
Note how the tilde stored in the stack representation is expanded in this example.
+
The directory returned is the same as that reported by dirs -l +1 where the -l option requests the full form be displayed.
+
As discussed in show 1843 we can also reference stack elements in reverse order. So the tilde expression ~-1 in the above scenario will return the second element counting from the bottom:
+
echo ~-1
+-> /home/hprdemo/Documents
+
The directory returned is the same as that reported by dirs -l -1.
+
Tilde expansion in variables
+
Normally the tilde forms we have looked at would be used in file system paths when referring to files or directories. It is also possible to assign their values to variables, such as:
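+
docs=~/Documents
+echo $docs
+-> /home/hprdemo/Documents
+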
Bash provides special variables (and other software might need its own such variables) which contain lists of paths separated by colons (:). For example, the PATH variable, which contains paths used when searching for commands:
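+
For example, the following assumes a ~/bin directory exists; the initial contents of PATH will vary from system to system:
+
echo $PATH
+-> /usr/local/bin:/usr/bin:/bin
+PATH=$PATH:~/bin
+echo $PATH
+-> /usr/local/bin:/usr/bin:/bin:/home/hprdemo/bin
+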
Notice how the addition to the PATH variable was not enclosed in quotes. If it had been then the tilde expansion would not have taken place.
+
Command Substitution
+
Commands often write output. Unless told otherwise they write this output to a channel known as standard output (STDOUT). It is possible to capture this output channel and use it in many contexts.
+
Take, for example, the date command. This reports a date and, optionally, a time, in various formats. To get today's date in the ISO8601 format (the only sane format, which everyone should adopt) the following command could be used:
+
date +%Y-%m-%d
+-> 2015-11-04
+
This output could be captured in a variable using command substitution as follows:
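+
today=$(date +%Y-%m-%d)
+echo $today
+-> 2015-11-04
+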
The format $(command) for command substitution is the recommended one to use. There is an older form which uses backquotes around the command. The above example could be rewritten as:
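+
today=`date +%Y-%m-%d`
+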
We will discuss only the $(command) form in these notes. See the manual page extract for details of the other format.
+
The text returned by the command is processed to remove newlines. The following example shows the date command being used to generate multi-line output (the sequence %n generates a newline in output from date, and we're not using the -> here to avoid confusion):
+
date +"Today's date is%n%Y-%m-%d"
+Today's date is
+2015-11-04
+
(Note that the argument to date is quoted because it contains spaces).
+
Using this in command substitution we get a different result:
+
today=$(date +"Today's date is%n%Y-%m-%d%n")
+echo $today
+-> Today's date is 2015-11-04
+
The embedded newline has been removed and replaced by a space.
+
As a final example, consider the following. A file words exists with one word per line. We want to construct a Bash loop which processes this file. To keep things simple we'll just echo each word followed by its length:
+
for w in $(cat words)
+do
+ echo "$w (${#w})"
+done
+
The for loop is simply given a list of words from the file by virtue of the command substitution $(cat words), which it then places one at a time into the variable w. We use the construct ${#w} to determine the length as discussed in show 1648.
+
Some typical output might be:
+
bulkier (7)
+laxness (7)
+house (5)
+overshoe (8)
+
There is an alternative (and faster) way of doing this without using cat:
+
for w in $(< words)
+do
+ echo "$w (${#w})"
+done
+
This is a real example; I always test the commands in my notes to check I have not made any glaring mistakes. You might be interested to know how I generated the file of words:
+
for i in {1..10}
+do
+ w=$(shuf -n1 /usr/share/dict/words)
+ w=${w%[^a-zA-Z]*}
+ echo $w
+done > words
+
The loop uses brace expansion as discussed in show 1884; it iterates 10 times. The shuf command is used to extract one line (word) at random from the system dictionary. Because many of these words have possessive forms, I wanted to strip the apostrophe and anything beyond it and I did that with an instance of Remove matching suffix pattern as discussed in show 1648. It removes any suffix consisting of a non-alphabetic character followed by others.
+
The resulting word is simply echoed.
+
The entire loop redirects its output (a list of 10 words) into the file words. We might be visiting the subject of redirection in a later show in this (sub-)series.
+
Since shuf can return multiple random words at a time, and since the removal of extraneous characters could have been done in the echo, this example could also have been written as:
+
for w in $(shuf -n10 /usr/share/dict/words)
+do
+ echo ${w%[^a-zA-Z]*}
+done > words
+
I tend to write short Bash loops of this sort on one line:
+
for w in $(shuf -n10 /usr/share/dict/words); do echo ${w%[^a-zA-Z]*}; done > words
+
Manual Page Extracts
+
Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion.
+
The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion.
+
On systems that can support it, there is an additional expansion available: process substitution. This is performed at the same time as tilde, parameter, variable, and arithmetic expansion and command substitution.
+
Only brace expansion, word splitting, and pathname expansion can change the number of words of the expansion; other expansions expand a single word to a single word. The only exceptions to this are the expansions of "$@" and "${name[@]}" as explained above (see PARAMETERS).
+
Tilde Expansion
+
If a word begins with an unquoted tilde character (~), all of the characters preceding the first unquoted slash (or all characters, if there is no unquoted slash) are considered a tilde-prefix. If none of the characters in the tilde-prefix are quoted, the characters in the tilde-prefix following the tilde are treated as a possible login name. If this login name is the null string, the tilde is replaced with the value of the shell parameter HOME. If HOME is unset, the home directory of the user executing the shell is substituted instead. Otherwise, the tilde-prefix is replaced with the home directory associated with the specified login name.
+
If the tilde-prefix is a ~+, the value of the shell variable PWD replaces the tilde-prefix. If the tilde-prefix is a ~-, the value of the shell variable OLDPWD, if it is set, is substituted. If the characters following the tilde in the tilde-prefix consist of a number N, optionally prefixed by a '+' or a '-', the tilde-prefix is replaced with the corresponding element from the directory stack, as it would be displayed by the dirs builtin invoked with the tilde-prefix as an argument. If the characters following the tilde in the tilde-prefix consist of a number without a leading + or -, + is assumed.
+
If the login name is invalid, or the tilde expansion fails, the word is unchanged.
+
Each variable assignment is checked for unquoted tilde-prefixes immediately following a : or the first =. In these cases, tilde expansion is also performed. Consequently, one may use filenames with tildes in assignments to PATH, MAILPATH, and CDPATH, and the shell assigns the expanded value.
+
Command Substitution
+
Command substitution allows the output of a command to replace the command name. There are two forms:
+
$(command)
+or
+ `command`
+
Bash performs the expansion by executing command and replacing the command substitution with the standard output of the command, with any trailing newlines deleted. Embedded newlines are not deleted, but they may be removed during word splitting. The command substitution $(cat file) can be replaced by the equivalent but faster $(< file).
+
When the old-style backquote form of substitution is used, backslash retains its literal meaning except when followed by $, `, or \. The first backquote not preceded by a backslash terminates the command substitution. When using the $(command) form, all characters between the parentheses make up the command; none are treated specially.
+
Command substitutions may be nested. To nest when using the backquoted form, escape the inner backquotes with backslashes.
+
If the substitution appears within double quotes, word splitting and pathname expansion are not performed on the results.
+
diff --git a/eps/hpr1938/hpr1938_full_shownotes.html b/eps/hpr1938/hpr1938_full_shownotes.html
new file mode 100755
index 0000000..f270f19
--- /dev/null
+++ b/eps/hpr1938/hpr1938_full_shownotes.html
@@ -0,0 +1,172 @@
+
How I prepare HPR shows (HPR Show 1938)
+
Dave Morriss
+
Table of Contents
+
Introduction
+
I have been contributing shows to Hacker Public Radio since 2012. In those far off days (!) we sent everything in via FTP, and had to name the files with a combination of our host id, our name, the slot number and the title. The show notes had to contain a chunk of metadata in a defined format to signal all of the various attributes of the show. I found myself making numerous mistakes with this naming and metadata formatting and so started designing and writing some tools to protect myself from my own errors.
+
I started developing a Bash script in mid-2013 which I called hpr_talk. I used Bash since I thought I might be able to make something with a small footprint that I could share, which might be useful to others. The script grew and grew and became increasingly complex and I found I needed to add other scripts to the toolkit and to resort to Perl and various Perl modules to perform some actions.
+
Then in 2014 Ken changed the upload procedure to what it is now. This is a much better design and does away with the need to name files in odd ways and add metadata to them. However, this left my toolkit a bit high and dry, so I shelved the plans to release it.
+
Since then I have been enhancing the hpr_talk toolkit, adding features that I found useful and removing bugs, until the present time. Now it is probably far too complex and idiosyncratic to be of direct use to others, and is rather too personalised to my needs to be easily shared. Nevertheless, it is available on GitLab and I am going to describe it here in case it (or the methods used) might be of interest to anyone.
+
Overview
+
As I have already said, the main script is called hpr_talk. It and its supporting scripts and files need to be unpacked into a directory where the shows are going to be stored. The principle is that each show and its related files are stored in their own sub-directory. The script derives the name of the directory from the title of the show it holds.
+
I keep my scripts and HPR episodes in the directory ~/HPR/Talks/, but there's no problem with placing them anywhere so long as the directory is readable and writable by the account used to manage things, and everything is kept together.
+
The hpr_talk script takes one or two arguments. The first is an action word which specifies the function you are carrying out (such as install or create). The second argument varies with the function you are using and is optional. If present it can be the name of the directory holding your show or can be the title of a show.
+
If the argument is -h then the script displays a help message.
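+
For example, a session might begin like this (the show title here is invented for illustration):
+
./hpr_talk create "My new show"
+./hpr_talk status My_new_show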
+
My Work-flow
+
I have tried to give a fairly brief summary about how I use hpr_talk when creating and developing an episode for HPR. These are the main steps:
+
+
Create a new episode. I sometimes do this when I have an idea for a show, just as a placeholder. During creation the script collects all the metadata relating to the show by asking questions, setting defaults where I'm not yet sure of the answer.
+
Having created the show, I might mess around with the title, summary, tags and so forth before I'm happy with them. This can be done at any time in the life-cycle of the episode.
+
Mostly the shows I create consist of brief notes for the main page, and some longer notes available off that page (this one is an example). My notes are all prepared in Markdown, but I convert them to HTML before uploading. I might have example files, scripts and pictures in some cases. If I do then they and the long notes are packaged into a compressed TAR file for upload.
+
Since I'm generating HTML from Markdown I have added features to hpr_talk to aid me with this process. I create boilerplate Markdown files with the script, and I generate a Makefile which allows me to automate the build using GNU make. Since I use Vim to edit my notes I generate the configuration file necessary to run the Session plug-in. This way all I have to do is open the session in gVim and all the files are ready to edit.
+
If I am including multiple files in my show, such as pictures or scripts, I list them in a manifest file. The hpr_talk script allows me to create this file, and the Makefile uses it when generating the TAR file.
+
As I work on the notes I rebuild the HTML version and view it in a browser. The build process can be run out of Vim using the make command, or through the hpr_talk script on the command line. References to files and images are relative to the local environment at this stage, so the links work, but the command make final (or an appropriate script option) can be used to build everything to work on the HPR server.
+
The audio is generated independently, perhaps using Audacity or a portable recorder. I save the raw audio in the show directory and edit it and finally export it to the show directory. I use Audacity to save generic audio tags, but I have the means of generating specific tags automatically from the configuration file. This is not currently part of the system available on GitLab due to the number of dependencies.
+
As the components of a show accumulate the files are "registered" through a script function. The design is such that the primary elements are:
+
+
the main notes
+
the supplementary files bundled into a TAR file (as defined by the manifest file)
+
the audio
+
+
Once everything is ready to be uploaded I reserve a slot and then use hpr_talk to record the selection in the configuration file. This causes files to be renamed automatically to reflect the show they belong to. It also causes any HTML references to be changed in the notes when I perform the final build. I fill in the upload form manually with the details from the configuration, and the main notes. Then I use hpr_talk to perform the upload, which it does using curl to write to the FTP server.
+
+
Features of the hpr_talk script
+
The actions supported by the script are:
+
Install the script
+
This is something that is normally done only once. It sets up all of the configuration files needed by the script. It asks for the HPR host id and host name, so is really only appropriate if the user has already received these. This function also allows updating so is useful if something changes like the FTP password, for example.
+
Create a new show
+
As mentioned previously, the script will prompt for a number of parameters which it will store away in a show-specific configuration file in the show directory (which it will create). While the eventual show number (slot) is unknown the main files are called hpr____.* and then renamed when a slot has been reserved.
+
Change the configuration of a show
+
This function allows viewing and manipulation of the configuration of the show. For example, if you want to change the title, or update the summary, or when you have decided on the slot.
+
Advanced configuration
+
Here some of the more complex features can be set up:
+
+
Create the main show note template
+
Create the full (extended) show note template
+
Create and populate a manifest file (called manifest)
+
Create a Makefile to drive the building of the show files. Several options can be enabled here, such as whether to produce ePub notes
+
Create a Vim session. You need to be using Vim or gVim with the 'Session' plug-in to use this.
+
+
Build the notes
+
As already mentioned this function runs various types of make commands to perform the actions needed to build notes.
+
+
Build notes: build the main notes into HTML
+
Build all: build all of the options chosen for the show such as full notes, ePub notes, TAR file. Do this for local viewing
+
Build final: same as 'build all' but use URLs suitable for the HPR site
+
Touch all: refresh all files to force make to rebuild them
+
+
Register the files you're going to upload
+
This lets you register the various files you have prepared for the show. It stores their details in the show configuration file.
+
Release the show
+
This is for uploading the show via FTP. You need to have registered everything in the configuration file first and should have built the final versions of everything.
+
Report the status of a show or all pending shows
+
There are two functions here. The first, status, prints a summary of the state of a show, mainly reporting the contents of the configuration file.
+
The summary function scans through all of the shows you have prepared but not uploaded and reports on their current state. This is useful if you are like me and have multiple ideas and partially completed shows stored in the main directory.
+
Example use
+
The following shows the output generated by hpr_talk when checking the status of this particular episode. The audio is done and registered, but a slot has not yet been chosen:
+
cendjm@i7-desktop:~/HPR/Talks$ ./hpr_talk status How_I_prepare_HPR_shows
+ Status of an HPR talk
+ --------------------------------------------------------------------------------
+
+ 1: Hostid: 225
+ 2: Hostname: Dave Morriss
+ 3: Email: perloid@autistici.org
+ 4: Project: How_I_prepare_HPR_shows
+ 5: Title: How I prepare HPR shows
+ 6: Slot: -
+ 7: Sumadded: No
+ 8: Inout: No
+ 9: Series: -
+ 10: Tags: Markdown,Pandoc,ePub,Bash,Perl,FTP
+ 11: Explicit: Yes
+ 12: Summary: I use my own tools for preparing my HPR shows. I talk about them in this episode
+ 13: Notetype: HTML
+ 14: Structure: Flat
+ 15: Status: Editing
+ Dir: /home/cendjm/HPR/Talks/How_I_prepare_HPR_shows
+ Files: hpr____.flac
+ Files: hpr____.html
+ Files: hpr____.tbz
+
+ Size of HTML notes: 2429 characters
+
Show Notes
+
As already mentioned the notes are expected to be in Markdown format. The tool used to process the Markdown is pandoc and certain assumptions have been made about how this is run. For example, the main notes are built as an HTML fragment, while the extended notes are built stand-alone. Also the extended notes refer to the HPR site for their CSS to make them compatible with the HPR look and feel.
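+
As a sketch of the kind of pandoc invocations this implies (the file names and the CSS URL below are placeholders, not the exact ones my Makefile uses):
+
pandoc -f markdown -t html -o shownotes.html shownotes.mkd
+pandoc -f markdown -t html -s -c http://hackerpublicradio.org/hpr.css -o full_shownotes.html full_shownotes.mkd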
+
If the ePub option is chosen then certain assumptions are made about the layout of the end product. This part of the design is really in a state of flux at the moment and is very much attuned to my tastes.
+
The notes are really a template
+
The main and supplementary notes files are both passed through a pre-processor before being read by pandoc. This is a Perl script which interprets expressions in the Template Toolkit syntax. The pre-processor is given arguments consisting of the names of files such as the extended notes and the contents of the manifest file. This simplifies the process of linking to supplementary files and images and allows the generated URLs to change depending on whether the HTML is for local viewing or is the final version for upload.
+
For example, my main notes file often ends with text such as the following:
+
I have written out a moderately long set of notes about this subject and these
+are available here [[% args.0 %]]([% args.0 %]).
+
Here [% args.0 %] is an expression which substitutes the first argument to the pre-processor into the text. These expressions are enclosed in a Markdown link in this example.
+
I also recently prepared a show with multiple images and inserted them thus:
+
[%- DEFAULT i = 0 -%]
+\
+*Slice the carrots diagonally*
+
+\
+*The slices should be moderately thick, about 5mm*
+
This generates the argument references programmatically, which I found was easier to manage when there were 30+ images.
+
Conclusion
+
The toolkit described here does a large amount of what I want when preparing HPR shows, and saves me from my failing memory. The present design still shows the signs of its origin in the days before the current submission mechanism, and this will be corrected in time.
+
I'd like to automate the completion of the submission form in a later version, though I doubt whether Ken will appreciate me attaching scripts to it.
+
You are welcome to try the scripts out if you want; that's why the project is on GitLab. There is not much documentation at the moment, but I am adding to it gradually. Please contact me if you have problems or suggestions on how to improve the project.
+
+
diff --git a/eps/hpr1941/hpr1941_J_Herbin_1.png b/eps/hpr1941/hpr1941_J_Herbin_1.png
new file mode 100755
index 0000000..cca05ef
Binary files /dev/null and b/eps/hpr1941/hpr1941_J_Herbin_1.png differ
diff --git a/eps/hpr1941/hpr1941_J_Herbin_2.png b/eps/hpr1941/hpr1941_J_Herbin_2.png
new file mode 100755
index 0000000..fa91cd3
Binary files /dev/null and b/eps/hpr1941/hpr1941_J_Herbin_2.png differ
diff --git a/eps/hpr1941/hpr1941_J_Herbin_3.png b/eps/hpr1941/hpr1941_J_Herbin_3.png
new file mode 100755
index 0000000..29c45cd
Binary files /dev/null and b/eps/hpr1941/hpr1941_J_Herbin_3.png differ
diff --git a/eps/hpr1941/hpr1941_J_Herbin_writing_s.png b/eps/hpr1941/hpr1941_J_Herbin_writing_s.png
new file mode 100755
index 0000000..74922da
Binary files /dev/null and b/eps/hpr1941/hpr1941_J_Herbin_writing_s.png differ
diff --git a/eps/hpr1941/hpr1941_Noodlers_Konrad_1.png b/eps/hpr1941/hpr1941_Noodlers_Konrad_1.png
new file mode 100755
index 0000000..ad2d8cd
Binary files /dev/null and b/eps/hpr1941/hpr1941_Noodlers_Konrad_1.png differ
diff --git a/eps/hpr1941/hpr1941_Noodlers_Konrad_2.png b/eps/hpr1941/hpr1941_Noodlers_Konrad_2.png
new file mode 100755
index 0000000..8b2c5d3
Binary files /dev/null and b/eps/hpr1941/hpr1941_Noodlers_Konrad_2.png differ
diff --git a/eps/hpr1941/hpr1941_Noodlers_Konrad_3.png b/eps/hpr1941/hpr1941_Noodlers_Konrad_3.png
new file mode 100755
index 0000000..0af3c3a
Binary files /dev/null and b/eps/hpr1941/hpr1941_Noodlers_Konrad_3.png differ
diff --git a/eps/hpr1941/hpr1941_Noodlers_Konrad_writing_s.png b/eps/hpr1941/hpr1941_Noodlers_Konrad_writing_s.png
new file mode 100755
index 0000000..37f9ce8
Binary files /dev/null and b/eps/hpr1941/hpr1941_Noodlers_Konrad_writing_s.png differ
diff --git a/eps/hpr1941/hpr1941_Pelikan_M215_1.png b/eps/hpr1941/hpr1941_Pelikan_M215_1.png
new file mode 100755
index 0000000..0b05343
Binary files /dev/null and b/eps/hpr1941/hpr1941_Pelikan_M215_1.png differ
diff --git a/eps/hpr1941/hpr1941_Pelikan_M215_2.png b/eps/hpr1941/hpr1941_Pelikan_M215_2.png
new file mode 100755
index 0000000..50eb1cd
Binary files /dev/null and b/eps/hpr1941/hpr1941_Pelikan_M215_2.png differ
diff --git a/eps/hpr1941/hpr1941_Pelikan_M215_3.png b/eps/hpr1941/hpr1941_Pelikan_M215_3.png
new file mode 100755
index 0000000..a1a514a
Binary files /dev/null and b/eps/hpr1941/hpr1941_Pelikan_M215_3.png differ
diff --git a/eps/hpr1941/hpr1941_Pelikan_M215_writing_s.png b/eps/hpr1941/hpr1941_Pelikan_M215_writing_s.png
new file mode 100755
index 0000000..9731372
Binary files /dev/null and b/eps/hpr1941/hpr1941_Pelikan_M215_writing_s.png differ
diff --git a/eps/hpr1941/hpr1941_Pen_Case_1.png b/eps/hpr1941/hpr1941_Pen_Case_1.png
new file mode 100755
index 0000000..4e49973
Binary files /dev/null and b/eps/hpr1941/hpr1941_Pen_Case_1.png differ
diff --git a/eps/hpr1941/hpr1941_Pen_Case_2.png b/eps/hpr1941/hpr1941_Pen_Case_2.png
new file mode 100755
index 0000000..d7c1b77
Binary files /dev/null and b/eps/hpr1941/hpr1941_Pen_Case_2.png differ
diff --git a/eps/hpr1941/hpr1941_Pilot_MR_1.png b/eps/hpr1941/hpr1941_Pilot_MR_1.png
new file mode 100755
index 0000000..fc3db09
Binary files /dev/null and b/eps/hpr1941/hpr1941_Pilot_MR_1.png differ
diff --git a/eps/hpr1941/hpr1941_Pilot_MR_2.png b/eps/hpr1941/hpr1941_Pilot_MR_2.png
new file mode 100755
index 0000000..c370a7d
Binary files /dev/null and b/eps/hpr1941/hpr1941_Pilot_MR_2.png differ
diff --git a/eps/hpr1941/hpr1941_Pilot_MR_3.png b/eps/hpr1941/hpr1941_Pilot_MR_3.png
new file mode 100755
index 0000000..46327e9
Binary files /dev/null and b/eps/hpr1941/hpr1941_Pilot_MR_3.png differ
diff --git a/eps/hpr1941/hpr1941_Pilot_MR_writing_s.png b/eps/hpr1941/hpr1941_Pilot_MR_writing_s.png
new file mode 100755
index 0000000..7f258c9
Binary files /dev/null and b/eps/hpr1941/hpr1941_Pilot_MR_writing_s.png differ
diff --git a/eps/hpr1941/hpr1941_Reform_1745_1.png b/eps/hpr1941/hpr1941_Reform_1745_1.png
new file mode 100755
index 0000000..c7eb706
Binary files /dev/null and b/eps/hpr1941/hpr1941_Reform_1745_1.png differ
diff --git a/eps/hpr1941/hpr1941_Reform_1745_2.png b/eps/hpr1941/hpr1941_Reform_1745_2.png
new file mode 100755
index 0000000..57a9154
Binary files /dev/null and b/eps/hpr1941/hpr1941_Reform_1745_2.png differ
diff --git a/eps/hpr1941/hpr1941_Reform_1745_3.png b/eps/hpr1941/hpr1941_Reform_1745_3.png
new file mode 100755
index 0000000..2725bc5
Binary files /dev/null and b/eps/hpr1941/hpr1941_Reform_1745_3.png differ
diff --git a/eps/hpr1941/hpr1941_Reform_1745_writing_s.png b/eps/hpr1941/hpr1941_Reform_1745_writing_s.png
new file mode 100755
index 0000000..a64319c
Binary files /dev/null and b/eps/hpr1941/hpr1941_Reform_1745_writing_s.png differ
diff --git a/eps/hpr1941/hpr1941_TWSBI_ECO_1.png b/eps/hpr1941/hpr1941_TWSBI_ECO_1.png
new file mode 100755
index 0000000..427ff74
Binary files /dev/null and b/eps/hpr1941/hpr1941_TWSBI_ECO_1.png differ
diff --git a/eps/hpr1941/hpr1941_TWSBI_ECO_2.png b/eps/hpr1941/hpr1941_TWSBI_ECO_2.png
new file mode 100755
index 0000000..cd1f35e
Binary files /dev/null and b/eps/hpr1941/hpr1941_TWSBI_ECO_2.png differ
diff --git a/eps/hpr1941/hpr1941_TWSBI_ECO_3.png b/eps/hpr1941/hpr1941_TWSBI_ECO_3.png
new file mode 100755
index 0000000..4e851f9
Binary files /dev/null and b/eps/hpr1941/hpr1941_TWSBI_ECO_3.png differ
diff --git a/eps/hpr1941/hpr1941_TWSBI_ECO_writing_s.png b/eps/hpr1941/hpr1941_TWSBI_ECO_writing_s.png
new file mode 100755
index 0000000..5552cda
Binary files /dev/null and b/eps/hpr1941/hpr1941_TWSBI_ECO_writing_s.png differ
diff --git a/eps/hpr1941/hpr1941_full_shownotes.html b/eps/hpr1941/hpr1941_full_shownotes.html
new file mode 100755
index 0000000..56829a6
--- /dev/null
+++ b/eps/hpr1941/hpr1941_full_shownotes.html
@@ -0,0 +1,217 @@
+
What's in my case (HPR Show 1941)
+
Dave Morriss
+
Table of Contents
+
Introduction
+
This might be a little bit of a cheat, but I have a leather pen case which I bought from China through eBay, and so I felt that this allowed me to add this show to the 'What's in my ...' series.
+
So, I'll come clean and admit that I'm a fountain pen geek. Actually, it's worse than that: I've also been an enthusiast of pens and stationery in general for all of my life. It seems to run in families, because my son is a pen enthusiast, so is my daughter (to a lesser extent) and so was my father before me. With this in mind I thought I'd talk about fountain pens on HPR in case there are any other Fountain Pen Geeks out there.
+
Why?
+
Why do I like writing this way?
+
When I was at school, at about age 7 or 8, in the mid 1950's we were taught to write with ink pens, having previously been using pencils. The school provided a good supply of dip pens, nibs, ink and blotting paper. The pens were quite primitive, just a wooden shaft with a nib holder into which a nib could be inserted. The nib was a scratchy thing, which needed some care or it dug into the paper. We were each allocated a sheet of blotting paper too. Each desk had a hole in it for an inkwell, into which fitted a white ceramic open-topped inkwell. Most of the desks had a slider which covered over the inkwell to stop it drying out. The designated Ink Monitor got to fill them each week out of a large stone bottle. The colour of the ink was always blue-black.
+
We were taught to write in the cursive style following the teachings of Marion Richardson. I guess this taught us good writing habits, though it didn't feel like that at the time.
+
Later, at High School, it was expected that we'd write either with a ballpoint pen, or (preferably) with a fountain pen. Fountain pens were fairly cool in those days, and quite a lot of people had them.
+
I think the ritual of buying or being bought a fountain pen was a sort of rite of passage during the transition to High School. I remember that there was much comparison of pens and degrees of rivalry over these pens in the school at that time.
+
I guess this history made me like writing with a fountain pen.
+
My experiences are not unique (of course). Many of my contemporaries and people older than myself who have been through the UK school system tend to be fountain pen users.
+
I have also found that many people I have encountered in the worlds of science and engineering are keen fountain pen users.
+
Why does anyone like writing this way?
+
It's generally thought that using a fountain pen leads to a better writing style:
+
+
Fountain pens can be easier to use for protracted periods since they usually require less pressure to write, and this results in less fatigue from long writing sessions
+
There are many nib styles, so finding one that suits your writing can give you a good writing experience, even though the hunt for that perfect combination can take a while.
+
Using a fountain pen can be more economical. There are many good basic fountain pens out there, with many nib types for very reasonable prices. There are also many inks available, so once you have found the combination you like you can continue using them very cheaply for a long time.
+
There is a certain novelty value in using one! There are signs of a resurgence of interest in fountain pens ("Return of the fountain pen").
+
+
Some Terminology
+
If you are not acquainted with the details of fountain pens I thought some explanation of a few terms might be helpful.
+
+
Nib
+
The writing part of the pen. Usually metal of various sorts. They come with different tip sizes varying from extra fine to broad. There is a variety of tip shapes too.
+
+
Tine
+
A part of the pen nib. Usually a fountain pen nib has two tines by virtue of the fact that there is a split down the centre. This helps the ink flow to the end of the nib.
+
+
Barrel
+
The long part of the main body of the pen which houses the ink reservoir.
+
+
Feed
+
The part under the nib which connects to the ink reservoir. This delivers ink to the nib through one or more channels by gravity and capillary action.
+
+
Section or Grip Section
+
The part of the pen into which the nib and the feed fit, and which is usually gripped when writing.
+
+
Posting
+
Refers to the cap being removed from the nib end and placed on the end of the barrel. Sometimes necessary to balance a pen, but some writers prefer un-posted pens.
+
+
Cartridge
+
An interchangeable ink reservoir which is normally thrown away when it is empty. There are two main international standard sizes but some pen manufacturers use their own designs. Some people re-fill their cartridges with a blunt hypodermic.
+
+
Piston filler
+
The reservoir of the pen is filled with a piston mechanism which draws ink up through the nib. The mechanism is usually operated by turning a knob connected to a plunger, though some use a simpler push-pull action. The piston mechanism is usually an integral part of the pen.
+
+
Converter
+
A device used to convert a cartridge pen into a refillable pen. Converters usually contain piston filling mechanisms, either threaded or simpler push/pull devices.
+
+
Demonstrator
+
A pen which is transparent, so its inner mechanism can be seen. These were originally just intended for salespeople to demonstrate the inner workings of the pens they were selling but the design caught on and is now quite popular.
+
+
+
My top 6 fountain pens
+
Compared to some collectors I have very few fountain pens. I currently have 19 usable pens, of which I use maybe 6 on a regular basis. I'll talk about my top 6 in this episode.
+
My style of writing seems to suit a fine nib, so I have tended to buy nibs classed as "Fine" or "Extra Fine". Pens which originate in Japan tend to have finer nibs than European pens, so a "Medium" Japanese nib is very similar to a European "Fine".
+
Pelikan Classic Range, Tradition M215 Diamonds
+
This model is also listed as the "Pelikan Tradition M215 Black/Silver Lozenge Fountain Pen" (see website). Mine was given to me as a Christmas present in 2012.
+
The Pelikan pen capped
+
The Pelikan pen uncapped
+
The Pelikan pen nib
+
Writing with the Pelikan pen
+
The one I have has an extra fine nib, and writes beautifully. The nib is very smooth and ink flow is superb. The pen has a metal body and cap. Note how the clip has the look of a Pelican's bill; this is a distinctive feature of this brand.
+
This is a fairly small pen, but that suits my writing style. It has a piston filling action with a knob at the end of the barrel. There is a small viewing window in the barrel through which you can see the amount of ink it currently contains.
+
Pelikan is a German company which makes a large range of pens. The top end of their range can be rather expensive, however.
+
TWSBI ECO Fountain Pen
+
This one is quite a new model which I bought for myself in July 2015. It is currently the lowest price model offered by TWSBI, at around £30.
+
The TWSBI pen capped
+
The TWSBI pen uncapped
+
The TWSBI pen nib
+
Writing with the TWSBI pen
+
This pen is available in white or black. Mine is the black model with the "Extra Fine" nib.
+
The pen is a "Demonstrator" as explained above, and is made of acrylic resin. It is filled by a piston action. The piston is operated by a knob at the end of the barrel. One particular feature is that the entire pen can be disassembled for cleaning and maintenance; there is a plastic spanner provided with it to help with this as well some silicon grease for lubricating the piston.
+
The pen writes really well. It is very slightly scratchy, possibly due to my choice of the finest nib, but it is an excellent pen to use.
Noodler's Konrad
+
I have bought two Noodler's brand pens, and I like this one the best. It is an inexpensive pen made of transparent blue resin. The colour is apparently called "Hudson Bay Fathom". The resin material used for the pen has a slightly odd smell. There are other acrylic pens in the Konrad range but they are about twice the price. This one was around £17.
+
It has a medium-fine steel nib of an interesting design. Notice the split between the two tines extends a long way down the nib. This nib is described as a flex nib. That is, the width of the lines it produces varies with the amount of pressure applied to the paper. This feature can be good for producing calligraphic effects, though I do not use this capability myself. It is a smooth and pleasant nib to use nevertheless.
+
The pen is filled by a piston mechanism. It is necessary to unscrew a small cap from the end of the barrel to operate the knurled knob that moves the piston.
+
The Noodler's pen capped
+
The Noodler's pen uncapped
+
The Noodler's pen nib
+
Writing with the Noodler's pen
+
Noodler's pens are made by the Noodler's Ink company in the USA. They are unusual and slightly quirky. I really like them and the Konrad writes well.
+
Reform 1745
+
This pen was a birthday present I received from my son in 2012. It's a small pen with what I assume is a fine nib - there is no indication on the nib itself. It has a piston filling mechanism with a knob which operates it at the end of the barrel. There is an ink viewing window in the barrel.
+
The Reform pen capped
+
The Reform pen uncapped
+
The Reform pen nib
+
Writing with the Reform pen
+
This model Reform pen is a small light pen in green and black with a copper-coloured clip.
+
These pens were originally manufactured in Germany in the 1930's and 1940's and became very popular, especially with school children. At one time they were reported to be the best selling pens in the world, and manufacturing continued into the 1970's.
+
This model of pen is still available at quite low cost, though the quality of the nib is very variable. I have bought two more of these on eBay (around £12 each) to give as presents and found I needed to learn how to tune the nibs to give a good writing experience.
+
Pilot MR
+
I bought this pen from Amazon in July 2015 as an experiment to see what it was like. I had heard good things about it from various sources including fountain pen users on GnuSocial. Apparently it (or a version very like this European one) is sold as the Metropolitan in the USA.
+
The Pilot pen capped
+
The Pilot pen uncapped
+
The Pilot pen nib
+
Writing with The Pilot pen
+
The pen I bought has a medium nib as indicated by the 'M' on it, but as mentioned elsewhere, this equates to "fine" in Europe since Pilot is a Japanese manufacturer.
+
This is a cartridge pen, and it takes Pilot branded cartridges and international sized cartridges. I believe that the Pilot CON-50 converter can be used, but I do not have one at the moment. I found that the Waterman brand cartridge fits this pen, and this contains a larger amount of ink than the other options.
+
It's a great pen and a good writer with a smooth and pleasant nib. For the price of around £15 it is excellent value for money, and is easily obtained.
+
The Pilot Corporation is a pen manufacturer based in Tokyo, Japan.
+
J. Herbin Transparent Fountain Pen
+
This was another recent purchase in August 2015. I noticed that these manufacturers, J. Herbin, who are usually known for their inks, were selling a fountain pen and a ballpoint pen. The ballpoint is refillable with fountain pen ink.
+
I actually bought the ballpoint for my daughter, but she didn't like it. My son was keen to have it, however, and he tells me he uses it often.
+
The J. Herbin pen capped
+
The J. Herbin pen uncapped
+
The J. Herbin pen nib
+
Writing with the J. Herbin pen
+
This pen is very small, measuring only 10cm (4 inches) uncapped. I find it needs to be used posted to make it a reasonable size. It has a fine nib and the body is transparent. It cost around £8 and the transparent rollerball was £5.
+
It takes international small cartridges, but I have managed to find a converter for it, though I haven't used it yet. The converter is made by Monteverde and is called the Mini Ink Converter.
+
The pen is very pleasant to use and writes well. I bought J. Herbin ink cartridges to use with it, and tried out a reddish colour called Terre de feu.
+
J. Herbin is a French company which sells inks, pens and other writing materials.
+
Pen Case
+
As I mentioned earlier, I bought a case for my pens. It was found on eBay and originated in China so it's not anything much. It's useful if I want to transport several pens at a time, but I don't keep my pens in it.
+
The pen case, zipped up
+
The pen case with one side populated
+
I would like to find a better storage device, such as a better quality case or perhaps a wooden box. However, having a pen case at least allowed me to make up a title for this show!
+
Starting as a fountain pen user
+
Some pens to choose from
+
If you are interested in becoming a fountain pen user yourself, here are a few suggested introductory pens for experimenting with this mode of writing:
+
+
Pilot VPen or V4. This is a Japanese disposable pen with a medium sized steel nib. It is available in multiple colours and has a large ink capacity. In the UK it costs around £4. In the United States (and other countries) it is known as the Varsity. I have one of these and find it to be excellent.
+
Platinum Preppy. This Japanese pen is made of clear polycarbonate. It is refillable with a proprietary cartridge, and several colours are available. An adaptor is available to allow it to use international standard small cartridges. It comes in three nib sizes: medium, fine and extra fine. In the UK this pen costs around £3. I have not yet tried this particular model.
+
J. Herbin Transparent Fountain Pen. This is another clear pen which takes international small cartridges or a converter, as discussed earlier. The cost is around £8 in the UK.
+
Pilot MR or Metropolitan. As already discussed. Around $15 in the US.
+
+
Paper and ink
+
These are two subjects which are very much related to the use of fountain pens. I will not go into detail here, but the choice of paper and ink can be important to improve your writing experience.
+
Choosing a good quality paper suitable for use with a fountain pen means that the pen will write smoothly, and the ink will not sink into the paper causing what is known as feathering. This is where the ink soaks into the fibres of the paper making the writing fuzzy and indistinct. Also, good quality paper can be written on both sides because the ink does not soak through, and the writing is not visible from the reverse side because the paper is thick enough. I like to use at least an 80gsm paper (gsm is grams per square metre, a measure of paper density).
+
In the case of inks, there are huge numbers to choose from, with a vast range of colours. It is important to choose inks specifically designed for a fountain pen; there are inks designed for dip pens which will clog a fountain pen. A good basic ink made by a company such as Parker or Waterman is a good choice for people starting to use a fountain pen. There is a large market in fountain pen inks, some of which can be expensive.
+
+
diff --git a/eps/hpr1946/hpr1946_Quorn_chicken_pieces.png b/eps/hpr1946/hpr1946_Quorn_chicken_pieces.png
new file mode 100755
index 0000000..bc5b8cb
Binary files /dev/null and b/eps/hpr1946/hpr1946_Quorn_chicken_pieces.png differ
diff --git a/eps/hpr1946/hpr1946_bean_sprouts.png b/eps/hpr1946/hpr1946_bean_sprouts.png
new file mode 100755
index 0000000..c647d53
Binary files /dev/null and b/eps/hpr1946/hpr1946_bean_sprouts.png differ
diff --git a/eps/hpr1946/hpr1946_chilli_sauce.png b/eps/hpr1946/hpr1946_chilli_sauce.png
new file mode 100755
index 0000000..0f2b10b
Binary files /dev/null and b/eps/hpr1946/hpr1946_chilli_sauce.png differ
diff --git a/eps/hpr1946/hpr1946_full_shownotes.html b/eps/hpr1946/hpr1946_full_shownotes.html
new file mode 100755
index 0000000..2e1fb0c
--- /dev/null
+++ b/eps/hpr1946/hpr1946_full_shownotes.html
@@ -0,0 +1,181 @@
+
Wok Cookery (HPR Show 1946)
+
Dave Morriss
+
Table of Contents
+
Introduction
+
Not for the first time I'm following in the footsteps of Frank Bell. Frank did an HPR episode entitled "A Beginner with a Wok", episode number 1787, on 2015-06-09. On it he spoke about his experiences stir-fry cooking using a wok.
+
Frank got a lot of comments about his episode and there seemed to be an interest in the subject. I have been interested in Chinese, Indonesian and other Far Eastern cookery styles for some time, and do a lot of cooking, so I thought I'd record a show about one of the recipes I use.
+
My son visits around once a week and eats dinner with me. I offered to cook him my version of Chow Mein which, since he is vegetarian, needed to use no meat. This is my description of the recipe I used.
+
I loosely based this version of Chow Mein on Ken Hom's recipe in his book Chinese Cookery, page 226. This is from his 1984 BBC TV series, which I watched. I also learnt many of my preparation techniques from Ken Hom's books and TV shows.
+
Preparing the ingredients
+
I prepared enough for about six servings in this case. My son often visits twice per week, and I like to cook generous quantities of food so there is enough for us both and left-overs for a few days thereafter.
+
It's my experience that cutting all of the ingredients for a stir-fry into similar sizes is desirable, to ensure they all cook in the allotted time. I have also been advised to use diagonal cutting when preparing harder root and stalked vegetables because it shortens the longer fibres, increases surface area, and helps the pieces to cook faster. I'm not entirely sure that this is the case, but I enjoy chopping vegetables and have got into the habit of doing it this way!
+
I now use a large cook's knife for chopping vegetables. I have a few Chinese cleavers of various sizes, but I find that a standard, properly sharpened cook's knife does the job very well.
+
Carrots
+
I used around six medium-sized carrots for this recipe. I always prepare carrots in the way I have shown here. This is partly for uniformity and speed of cooking and partly for aesthetic reasons.
+
+Slice the carrots diagonally
+
+The slices should be moderately thick, about 5mm
+
+Cut the slices into sticks about 5mm wide
+
+All carrots cut into sticks
+
Celery
+
I used around six or seven sticks of celery for this dish. I removed the top and trimmed the bottom of each stalk. I peeled the convex surface with a potato peeler to remove the larger fibres if they seemed a bit coarse.
+
+Cut each stalk into manageable pieces
+
+Cut lengthwise if the pieces are large
+
+Cut diagonally into sticks about 5mm thick
+
+All the celery cut into sticks
+
Beans
+
These are so-called French beans, the name commonly used in the UK. The ones found in UK supermarkets in November are often imported from Egypt. They are plain green or stringless beans. I never buy them pre-trimmed since the trimmed ends tend to dry out and go brown.
+
+Top and tail the beans
+
The process of rolling the beans 180° between cuts that I used here is really only done for aesthetic reasons. So-called roll-cutting can be used to some effect with larger vegetables such as courgettes and large carrots.
+
+Cut diagonally into manageable pieces, rolling between cuts
+
+All beans cut up
+
Mange tout
+
The peas I used are sold as Mange tout (French for "eat all") in Scotland, but are probably Snow peas since they are flat.
+
+Top and tail the mange tout and cut them diagonally into convenient pieces
+
+All mange tout cut into pieces
+
Peppers
+
I cut peppers the way shown here for stir-fries when I have a lot of long, thin ingredients. There are many alternative ways of preparing them. Leaving them in quite large chunky pieces is often seen in Chinese food.
+
+Peppers quartered vertically
+
+Peppers cored, halved horizontally and sliced vertically
+
+All peppers cut into pieces
+
Onions
+
I learned this way of cutting onions for a stir-fry from various Asian friends. Again it results in pieces of a similar dimension to the other ingredients. I have seen some cooks separate all of the layers of the slices to ensure that they are well distributed in the resulting dish.
+
+Onions peeled, topped and tailed and cut in half vertically
+
+Onions sliced vertically into strips about 3mm wide
+
+All onions cut into pieces
+
Garlic
+
I love garlic and always use an entire bulb for anything I'm making. Some people chop their garlic up finely for a stir-fry, but I never do because I think the flavour tends to get lost if I do.
+
+Garlic cloves, trimmed and peeled
+
+All garlic cut into thin slices
+
Mushrooms
+
How you prepare mushrooms really depends on how big they are. I used a variety which is sold in the UK as Chestnut mushrooms. These are a brown colour and have more flavour than the usual white mushrooms.
+
I have cut them fairly small in this recipe to be similar in size to the other ingredients.
+
+Button mushrooms washed, cut in half vertically and sliced
+
Other ingredients
+
Bean sprouts are one of the usual constituents of Chow Mein, so I have included them here.
+
+Bean sprouts
+
Quorn is a meat substitute, and this particular form resembles chunks of chicken meat. I used two 300 gram bags for this recipe. To prepare it for adding to the stir-fry I stir-fried it on its own from frozen in a little oil over a medium heat. This thaws it out and browns the outside slightly, and adds to the flavour in the process.
+
+Quorn chicken pieces (frozen)
+
I used medium egg noodles for this recipe, cooking enough for four people in this case - that's four blocks for this particular brand. These noodles need to be softened by simmering in boiling water for 4 minutes, and draining. I then added soy sauce and a little sesame oil to stop them sticking together.
+
+Medium egg noodles
+
+Softened noodles
+
I have shown the rice wine I used to flavour the stir-fry in the earlier stages. I also added a fairly liberal amount of soy sauce to the cooking ingredients, and finished off with sesame oil for flavouring. This should not be added too soon because it burns.
+
+Rice wine, soy sauce and sesame oil
+
Cooking
+
I used my large two-handled wok for this. I have three, though one is actually meant for making tempura. The large wok is 18½ inches in diameter (about 47cm), has a round bottom and is made of stainless steel. Even though it has a round bottom the shallow shape allows it to balance on my gas hob quite well. A deeper wok would tend to tip over I find.
+
I used a stainless steel wok spatula for stirring the ingredients. Some professional cooks use a wok ladle for this, but I find the shovelling action of the spatula better.
+
To cook this Chow Mein I used peanut oil. I put the gas burner on full and when the oil was hot I began with the onions and garlic. These ingredients flavour the oil and much Far Eastern cookery starts with them.
+
I then added the ingredients which require the most cooking: the carrots, celery and beans. These were stir-fried with very frequent stirring for about five minutes. I added rice wine (about a tablespoon) and light soy sauce at this stage.
+
+Cooking the carrot, celery, and beans
+
I then added the next set of ingredients: the peppers, mange tout, mushrooms and the Quorn (which had been previously cooked, as described earlier). If this had been a non-vegetarian version and had used chicken, I would have pre-cooked the chicken and added it at this point. This stage cooked for about five minutes.
+
+Added the peppers, mange tout, mushrooms and the Quorn
+
Now the noodles could be added. These require a lot of mixing, which is why I used a large wok. Since they are cold by the time they are added they need time to warm through.
+
+Added the noodles
+
Now the bean sprouts could be added. Again, these need to be well mixed because they need to warm through and be lightly cooked. I added sesame oil at this stage too so that it would coat everything and flavour it.
+
+Added the bean sprouts
+
By now the carrots, celery and beans should be well cooked. Ideally they should still have a bit of firmness to them for a crunchy result. The bean sprouts should be slightly wilted but not overcooked, and the whole dish should have been heated through.
+
+The end result
+
Both my son and I are very keen on all sorts of chilli sauces, so this meal was eaten with a Chinese sauce with the brand name Laoganma which I get from my local Chinese Supermarket.
+
+Laoganma chilli sauce
+
The picture shows the Chinese symbols I use to identify this particular sauce, since it's quite hard to do so otherwise, I have found. Be aware that this sauce contains peanuts.
There is also another expansion, process substitution, which is performed at the same time as arithmetic expansion on systems that can implement it.
+
We will look at one more of these expansion types in this episode but since there is a lot to cover, we'll continue this subject in a later episode.
+
Note
+
For this episode I have changed the convention I am using for indicating commands and their output to the following:
+
$ echo "Message"
+Message
+
The line beginning with a $ is the command that is typed and the rest is what is returned by the command.
+
It was pointed out to me that there was ambiguity in the examples in previous episodes, for which I apologise.
+
Arithmetic expansion
+
This form of expansion evaluates an arithmetic expression and returns the result. The format is:
+
$((expression))
+
So an example might be:
+
$ echo $((42/5))
+8
+
This is integer arithmetic; the fractional part is simply thrown away.
+
To digress: if you want the full fractional answer then using the bc command would probably be wiser. This was covered in Dann Washko's "Linux in the Shell" series in HPR show number 1202.
+
For example, using bc in command substitution as in:
+
$ echo $(echo "scale=2; 42/5" | bc)
+8.40
+
The "scale=2" is required to make bc output the result with two decimal places. By default it does not do this.
+
Note that using echo to report the result of this command sequence is not normally useful. It is used here just to demonstrate the point. Writing something like the following makes more sense in a script:
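+
result=$(echo "scale=2; 42/5" | bc)
+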
The expressions allowed by Bash in arithmetic expansion include the use of variables. Normally these variables are written as just the plain name without the leading '$', though adding this is permitted. For example:
+
$ x=42
+$ echo $((x/5))
+8
+$ echo $(($x/5))
+8
+
There are potential pitfalls with using the '$' however, as we will see. The expression is subject to variable expansion, so in the second example above $x becomes 42 and the expression resolves to 42/5.
+
If a variable is null or unset (and is used without the leading '$') then it evaluates to zero. This is another reason not to use the parameter substitution method.
+
The value of a variable is always interpreted as an integer. If it is not an integer (for example, if it's a text string) then it is treated as zero.
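+
For example:
+
$ x=hello
+$ echo $((x))
+0
+$ x=0x10
+$ echo $((x))
+16
+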
Bash also interprets non-decimal numerical constants (as in the second example above). For a start, any number beginning with a zero is taken to be octal, and hexadecimal numbers are denoted by a leading 0x or 0X.
+
Be aware that the way in which octal constants are written can lead to unexpected outcomes:
+
$ x=010
+$ echo $((x))
+8
+$ x=018
+$ echo $((x))
+bash: 018: value too great for base (error token is "018")
+$ printf -v x "%03d\n" 19
+$ echo $((x))
+bash: 019: value too great for base (error token is "019")
+
There is also a complete system of defining numbers with bases between 2 and 64. Such numbers are written as:
+
base#number
+
If the 'base#' is omitted then base 10 is used (or the octal and hexadecimal conventions above may be used).
+
As with hexadecimal numbers, other characters are used to show the digits of other bases. These are, in order, 'a' to 'z', 'A' to 'Z', '@' and '_'.
+
The contexts in which these number formats are understood by Bash are limited. Consider the following:
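+
$ x=16#F
+$ y=2#0101
+$ echo "$x $y"
+16#F 2#0101
+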
Bash has not converted these values, but has treated them like strings.
+
It is possible to declare a variable as an integer (and set its value) thus:
+
$ declare -i II=16#F
+$ echo $II
+15
+
In this case the base 16 number has been converted.
+
There is also a let command that will evaluate such numeric constants:
+
$ let x=16#F
+$ echo $x
+15
+
Alternatively, using arithmetic expansion syntax causes interpretation to take place:
+
$ x=16#F
+$ echo $((x))
+15
+
The following loop could be used to examine the decimal values 0..64 using base64 notation. I have written it here as a short Bash script which could be placed in a file:
+
#!/usr/bin/env bash
+
+for x in {0..9} {a..z} {A..Z} @ _; do
+ n="64#$x"
+ echo "$n=$((n))"
+done
There is more that can be said about this, but I will leave you to explore. I could possibly talk about this subject in another episode if there is any interest.
+
Examples of Arithmetic Evaluation
+
The way in which the arithmetic expression in an arithmetic expansion is interpreted is defined in the Bash manpage under the ARITHMETIC EVALUATION heading. A copy of this is included at the end of these notes.
+
The use of arithmetic evaluation in Bash is quite powerful but has some problems. I could devote a whole episode to this subject, but I will restrict myself in this episode. I prepared a few examples of some of the operators which I hope will give some food for thought.
+
Pre- and post-increment and decrement
+
These operators increment or decrement the contents of a variable by 1. The pre- and post- effects control whether the operation is performed before or after the value is used.
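+
For example:
+
$ myvar=128
+$ echo $((myvar++))
+128
+$ echo $myvar
+129
+$ echo $((++myvar))
+130
+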
Note that the pre- and post-increment and decrement operators need variables, not numbers. That means that the following is acceptable, as we saw:
+
$ myvar=128
+$ echo $((++myvar))
+129
+
However, placing a $ in front of the variable name causes substitution to take place and its contents to be used, which is either not acceptable or leads to unwanted effects:
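+
$ myvar=128
+$ echo $((++$myvar))
+128
+$ echo $myvar
+128
+
Here $myvar was substituted before evaluation, so Bash saw ++128 and treated the two plus signs as unary plus operators; the variable was not incremented.
+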
Also be aware that the following expression is not illegal, but is potentially confusing:
+
$ myvar=128
+$ echo $((--$myvar))
+128
+
It substitutes the value of myvar which then has two '-' signs in front of it: --128. The effect is the same as -(-128), in other words, the two minus signs cancel one another. Plenty of scope for confusion!
+
Unary minus
+
$ int=16
+$ echo $((-int))
+-16
+
This example just turns 16 into minus 16, as you would expect. We already saw this when discussing the pre-decrement operator.
+
Exponentiation
+
$ echo $((int**2))
+256
+
Here we compute 16².
+
More complex expressions with parentheses
+
$ echo $(( (int**2 + 3) % 5 ))
+4
+
This adds 3 to the result of 16² (259) then returns the remainder after division by 5. We need the parentheses to prevent the % (remainder) operator applying to the 3.
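+
Shift operators
+
The next examples (still using int, which contains 16) shift bits to the right and left, and combine the results with bitwise OR:
+
$ echo $((int>>1))
+8
+$ echo $((int<<1))
+32
+$ echo $((int<<1 | 8))
+40
+$ printf "%#x\n" $((int<<1 | 8))
+0x28
+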
Since 16 is binary 10000, shifting it to the right once returns 1000 which is the binary representation of decimal 8.
+
Shifting 16 to the left once returns 100000 which is 32 in decimal.
+
Taking 16 shifted left 1 (32) and binary OR'ing 8 to it is the same as binary 100000 OR 01000 which is 101000, which is decimal 40.
+
The same calculation printed as hexadecimal is 28, which can be visualised in binary as 0010 1000. The printf format %#x prints numbers in hexadecimal with a leading 0x.
+
+
Conditional operator
+
The conditional operator is similar to the equivalent in C and many other languages but only operates with integer values.
+
$ myvar=$((int<<3))
+$ msg=('under 100' 'between 100 and 200' 'over 200')
+$ range=$((myvar>100?$((myvar>200?2:1)):0))
+$ echo "myvar=$myvar, range=$range, message: ${msg[$range]}"
+myvar=128, range=1, message: between 100 and 200
+
Here myvar is set to 16 shifted left 3 places, which is the same as multiplying it by 2 three times, resulting in 128.
+
We declare an array msg which holds three text strings (index 0, 1 and 2).
+
Then range is set to the result of a complex expression. If myvar is greater than 100 then the second arithmetic expansion is used which tests to see if myvar is greater than 200. If it is then the result returned is 2, otherwise 1 is returned. If the value of myvar is less than 100 then 0 is returned.
+
So a value of 0 means "under 100", 1 means "between 100 and 200" and 2 means "over 200". The echo reports the values of myvar and range and uses range to index the appropriate element of the array msg (we looked at array indexing in episode 1684).
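+
$ myvar=$((int<<4))
+$ range=$((myvar>100?$((myvar>200?2:1)):0))
+$ echo "myvar=$myvar, range=$range, message: ${msg[$range]}"
+myvar=256, range=2, message: over 200
+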
Here we set myvar to 16 shifted left 4 places, or in other words 16 times 2⁴, which is 256. We then recalculate the value of range and use the same echo as before.
+
These are not particularly robust examples of conditional expressions, but hopefully they serve to make the point.
+
Assignment
+
Variable assignments may be performed in these arithmetic expressions. The assignment operators also include combinations with arithmetic operators:
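+
$ echo $((x=12#20))
+24
+$ echo $((x*=2))
+48
+$ echo $((x%=5))
+3
+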
In this example x is set to 20₁₂ ("two zero base 12") which is decimal 24. This is then multiplied by 2 and saved back into x, then the remainder of division by 5 is saved in x.
+
$ echo $((b=2#1000))
+8
+$ echo $((b|=2#10))
+10
+
Here the number 1000₂ ("one zero zero zero base 2") is saved in b, which is decimal 8. This is then bitwise OR'ed with 10₂ ("one zero base 2") which is 2. The result saved in b is decimal 10 (or 1010₂).
+
There is no simple way of printing binary numbers in Bash. If you have difficulty in visualising them, you can use bc as follows:
+
$ echo "obase=2;$b" | bc
+1010
+
The obase variable in bc defines the output number base.
+
Manual Page Extracts
+
Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion.
+
The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion.
+
On systems that can support it, there is an additional expansion available: process substitution. This is performed at the same time as tilde, parameter, variable, and arithmetic expansion and command substitution.
+
Only brace expansion, word splitting, and pathname expansion can change the number of words of the expansion; other expansions expand a single word to a single word. The only exceptions to this are the expansions of "$@" and "${name[@]}" as explained above (see PARAMETERS).
+
Arithmetic Expansion
+
Arithmetic expansion allows the evaluation of an arithmetic expression and the substitution of the result. The format for arithmetic expansion is:
+
$((expression))
+
The old format $[expression] is deprecated and will be removed in upcoming versions of bash.
+
The expression is treated as if it were within double quotes, but a double quote inside the parentheses is not treated specially. All tokens in the expression undergo parameter and variable expansion, command substitution, and quote removal. The result is treated as the arithmetic expression to be evaluated. Arithmetic expansions may be nested.
+
The evaluation is performed according to the rules listed below under ARITHMETIC EVALUATION. If expression is invalid, bash prints a message indicating failure and no substitution occurs.
+
+
ARITHMETIC EVALUATION
+
The shell allows arithmetic expressions to be evaluated, under certain circumstances (see the let and declare builtin commands and Arithmetic Expansion). Evaluation is done in fixed-width integers with no check for overflow, though division by 0 is trapped and flagged as an error. The operators and their precedence, associativity, and values are the same as in the C language. The following list of operators is grouped into levels of equal-precedence operators. The levels are listed in order of decreasing precedence.
+
+
id++ id--
+
variable post-increment and post-decrement
+
+
++id --id
+
variable pre-increment and pre-decrement
+
+
- +
+
unary minus and plus
+
+
! ~
+
logical and bitwise negation
+
+
**
+
exponentiation
+
+
* / %
+
multiplication, division, remainder
+
+
+ -
+
addition, subtraction
+
+
<< >>
+
left and right bitwise shifts
+
+
<= >= < >
+
comparison
+
+
== !=
+
equality and inequality
+
+
&
+
bitwise AND
+
+
^
+
bitwise exclusive OR
+
+
|
+
bitwise OR
+
+
&&
+
logical AND
+
+
||
+
logical OR
+
+
expr?expr:expr
+
conditional operator
+
+
= *= /= %= += -= <<= >>= &= ^= |=
+
assignment
+
+
expr1 , expr2
+
comma
+
+
+
+
Shell variables are allowed as operands; parameter expansion is performed before the expression is evaluated. Within an expression, shell variables may also be referenced by name without using the parameter expansion syntax. A shell variable that is null or unset evaluates to 0 when referenced by name without using the parameter expansion syntax. The value of a variable is evaluated as an arithmetic expression when it is referenced, or when a variable which has been given the integer attribute using declare -i is assigned a value. A null value evaluates to 0. A shell variable need not have its integer attribute turned on to be used in an expression.
+
Constants with a leading 0 are interpreted as octal numbers. A leading 0x or 0X denotes hexadecimal. Otherwise, numbers take the form [base#]n, where the optional base is a decimal number between 2 and 64 representing the arithmetic base, and n is a number in that base. If base# is omitted, then base 10 is used. When specifying n, the digits greater than 9 are represented by the lowercase letters, the uppercase letters, @, and _, in that order. If base is less than or equal to 36, lowercase and uppercase letters may be used interchangeably to represent numbers between 10 and 35.
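+
For example, hexadecimal, octal, and explicit-base constants all evaluate as you would expect:
+
$ echo $((0x1f)) $((010)) $((2#1010)) $((16#ff))
+31 8 10 255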
+
Operators are evaluated in order of precedence. Sub-expressions in parentheses are evaluated first and may override the precedence rules above.
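+
For example, multiplication binds more tightly than addition, but parentheses change the grouping:
+
$ echo $((2+3*4)) $(( (2+3)*4 ))
+14 20
+
+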
In late 2013 I noticed the local Edinburgh Hacklab were offering soldering courses in which you built a BlinkStick. I offered to sign my son Tim up for the next course since he wanted to learn to solder. He couldn't afford the time at that point, but we agreed to buy some BlinkSticks to build at home.
+
This episode describes some of our experiences with building and using the device.
+
The version we bought and built was the v1.0 release, since that and the BlinkStick Pro were all that was available. The base version now available is v1.1, and there are several other products available from the manufacturer in addition to these. The company is called Agile Innovative Ltd, based in the UK.
+
Building the kit
+
The v1.0 BlinkStick kit came with a nice quality PCB, a male USB type A connector, resistors, diodes and capacitors, an RGB LED and a socketed ATtiny85 micro-controller.
+
+Components of the BlinkStick
+
The build instructions were great and very easy to follow. We had the suggested helping hands as you can see in the picture, as well as side and end cutters. The side cutters were better in general.
+
We added the six resistors first as recommended. For a first-timer this allows some chances to learn about bending the wires properly and soldering neatly.
+
Note: I don't know which picture belongs to which BlinkStick in this group. Some are mine, and some are Tim's. He's now an excellent solderer and has recently taught himself how to do Surface Mount soldering, which is more than I can do!
+
+Adding resistor number 1, before cutting the wires
+
+Now resistor 2 has been added, as seen from the top of the board
+
+Now resistors 3, 4 and 5 have been added
+
The polarity of the Zener diodes is important as was made clear from the instructions.
+
+Now we have resistor 6 and have added the two diodes
+
Next the capacitors were added, again being careful about polarity. This was not difficult given the excellent instructions. We didn't take any pictures of this stage.
+
This was followed by fitting the USB plug which needs to be anchored to the board and soldered to the circuit, making sure it is straight.
+
+The underside of the board while the USB plug is being fitted
+
Next the IC (integrated circuit) socket needed to be soldered on ready for the ATtiny85 chip. Then the RGB LED could be added. It was a little difficult to get this level and close to the board as you can see from the next picture.
+
+The RGB LED close up
+
Finally, the ATtiny85 chip was inserted into the socket, making sure to get the orientation right. This completed the board as can be seen in the final image.
+
+The finished board side view
+
+The finished board from above
+
The software
+
There are Python, Ruby and Node.js versions of the software available on GitHub.
Just before preparing these notes I updated the software since it had been a while since I originally installed it:
+
$ sudo pip install --upgrade blinkstick
+
Version 1.1.8 is the latest release at the time of writing.
+
This provides a command-line interface through the blinkstick command.
+
There is no manual page, but details of how to run the command can be obtained with the command:
+
$ blinkstick --help
+
Access to the BlinkStick device normally requires root access but the blinkstick command has the capability of creating a udev rule thus:
+
$ sudo blinkstick --add-udev-rule
+
Thereafter root permissions are not required. The rule is placed in the file:
+
/etc/udev/rules.d/85-blinkstick.rules
+
Note: In the audio I said that the BlinkStick device is visible as a /dev/* device. This is not strictly true and probably misleading. It can be seen when using the lsusb command, but is not a mounted device that can be seen with df, which is what I implied.
+
Some of the features of the blinkstick command are:
+
--brightness=LIMIT Limit the brightness of the color 0..100
+--limit=LIMIT Alias to --brightness option
+--set-color=COLOR Set the color for the device. This can also be the
+ last argument for the script. The value can either be
+ a named color, hex value, 'random' or 'off'. CSS color
+ names are defined http://www.w3.org/TR/css3-color/
+ e.g. red, green, blue. Specify color using hexadecimal
+ color value e.g. 'FF3366'
+--inverse Control BlinkSticks in inverse mode
+
+--blink Blink LED (requires --set-color or color set as last
+ argument, and optionally --delay)
+--pulse Pulse LED (requires --set-color or color set as last
+ argument, and optionally --duration).
+--morph Morph to specified color (requires --set-color or
+ color set as last argument, and optionally --duration).
+--duration=DURATION Set duration of transition in milliseconds (use with
+ --morph and --pulse).
+--delay=DELAY Set time in milliseconds to light LED for (use with
+ --blink).
+--repeats=REPEATS Number of repetitions (use with --blink and --pulse).
+
Details of the command line interface are available in the Python Wiki.
+
Simple use might be the following which sets the BlinkStick colour to blue:
+
$ blinkstick blue
+
By default the brightness is at the maximum, 100, but it can be turned down like this:
+
$ blinkstick --brightness=30 green
+
So, for example, in a script it is possible to make it pulse alternately red and green for 20 iterations like this:
+
for i in {1..20}
+do
+ blinkstick --pulse red
+ sleep 0.5
+ blinkstick --pulse green
+ sleep 0.5
+done
+
You can have more than one BlinkStick plugged in at any time. To address them you need to find their serial numbers:
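+
One way to list them, assuming your version of the blinkstick tool provides the --info option (which reports details of each connected device, including its serial number), is:
+
$ blinkstick --info
+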
Then to refer to a specific BlinkStick use the --serial=SERIAL option:
+
$ blinkstick --serial=BS000473-1.0 white
+
There is also a Python programming interface, as mentioned, and more information about this (with example code) can be found on the Python Wiki referred to earlier.
+
My use of the BlinkStick
+
My current project using the BlinkStick is quite simple. As an HPR volunteer I wrote a web scraper script to spot when a new show gets submitted and appears on the calendar page. I want to know this so I can run the show notes through the various scripts I am developing.
+
I run the scraper out of cron where it performs a check every 30 minutes. If there is something for me to check out I make the script generate a sound, write a pop-up message on screen and turn on the BlinkStick choosing the red colour.
+
I have a 7-port powered hub on my desk, and I leave the BlinkStick connected to this so that if the light comes on it's very obvious.
+
The script, called cronjob_scrape, is available on GitLab.
+
BlinkStick Pro
+
In 2014 I also bought a BlinkStick Pro from Agile Innovative. This device does not have any LEDs itself but can control a wide variety of other LED systems. I have not yet built this device and I don't have any specific projects for it.
+
I do have an Adafruit Neopixel 24-LED ring which I may use with this, but I would quite like to buy one of these reels of RGB LEDs on a self-adhesive strip and control it with the Pro.
+
I plan to do another HPR episode on the Pro when I have built it and set it up.
+
+
Introduction to sed - part 1 (HPR Show 1976)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
sed is an editor which expects to read a stream of text, apply some action to the text and send it to another stream. It filters and transforms the text along the way according to instructions provided to it. These instructions are referred to as a sed script.
+
The name "sed" comes from Stream Editor, and sed was developed from 1973 to 1974 as a Unix utility by Lee E. McMahon of Bell Labs. GNU sed added several new features including better documentation, though most of it is only available on the command line through the info command. The full manual is of course available on the web.
+
Using sed
+
The sed command is usually invoked with a sed script and an input file on the command line. You might see:
+
$ sed -e 's/old/new/' infile > outfile
+
In this example the -e introduces the sed script which is enclosed in single quotation marks. The file infile is read and edited. The result is written to standard output which in this case is being redirected to a file called outfile.
+
In this episode the sed examples are often being applied to a small file of text, containing the following lines copied from the "about" page on the HPR site:
+
Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct
+continuation of Twatech radio. Please listen to StankDawg's "Introduction to
+HPR" for more information.
+
+What differentiates HPR from other podcasts is that the shows are
+produced by the community - fellow listeners like you. There is no
+restrictions on how long the show can be, nor on the topic you can
+cover as long as they "are of interest to Hackers". If you want to see
+what topics have been covered so far just have a look at our Archive.
+We also allow for a series of shows so that host(s) can go into more
+detail on a topic.
+
The file sed_demo1.txt is available on the HPR site.
+
If the input file is missing sed expects its input to come from standard input so you might see a pipeline such as:
+
$ wc -l sed_demo1.txt | sed -e 's/ .*$//'
+
Here the wc command counts the lines in sed_demo1.txt and normally reports the number and the filename:
+
$ wc -l sed_demo1.txt
+13 sed_demo1.txt
+
We remove the filename using sed leaving just the number - 13. We'll be looking at how this sed example works later.
+
Note: using wc the way shown below is a simpler way of solving this problem:
+
$ wc -l < sed_demo1.txt
+13
+
Options
+
Some of the most frequently used options to the sed command are:
+
+
-e SCRIPT or --expression=SCRIPT
+
Defines the sed commands to be executed (the sed "script"). There can be multiple such options.
+
+
-f SCRIPT-FILE or --file=SCRIPT-FILE
+
Defines a file of sed commands. There can be multiple files, and these can be combined with scripts on the command-line as well.
+
+
--help
+
Displays help information and exits
+
+
+
If no -e, --expression, -f, or --file option is given, then the first non-option argument is taken as the sed script to interpret. All remaining arguments are names of input files; if no input files are specified, then the standard input is read.
+
How sed works
+
We will just look at the basics of how sed uses commands to process incoming data in this episode. We will look into this subject in more depth in later episodes.
+
As mentioned under Options, sed takes in commands or scripts from the command line or from files, and stores them.
+
It then processes the data it has been given through input files or piped to it on STDIN. It reads this input one line at a time, placing it in what is referred to as the pattern space.
+
Then sed runs the saved commands on the pattern space. The full range of available commands is such that they can be conditional, but we'll leave these details until a later episode. The commands may change the data in the pattern space.
+
Once all the commands have been executed the contents of the pattern space are printed, the pattern space cleared and the next line is read.
+
The printing of the pattern space is the default behaviour but can be overridden as we will see in a later episode.
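+
A quick way to see this line-by-line cycle in action is to feed sed two lines and a single substitution; the command is applied afresh to each line:
+
$ printf 'one\ntwo\n' | sed -e 's/o/0/'
+0ne
+tw0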
+
Simple sed scripts (the s command)
+
The commonest sed command is the s (substitute) command. It has the structure:
+
s/REGEXP/REPLACEMENT/FLAGS
+
Its purpose is to look for a pattern (REGEXP) and, if found, to replace it (with REPLACEMENT). The real power of sed (and other parts of Linux and Unix) is in the type of pattern called a regular expression (regexp for short).
+
We need to look at the fundamentals of regular expressions to appreciate the sophistication of what can be done.
+
The FLAGS part is used to modify the behaviour of the command. We'll look at one commonly-used flag in this episode but will reserve the full range for later episodes.
+
Simple Regular Expressions
+
Regular expressions are patterns which are used to match a string. We will begin by looking at some of the simplest forms.
+
A regular expression is a sort of language in which certain characters have special meanings. The following table shows some of the simpler meta characters used by sed. We will look into these in more detail in a later episode.
+
+
+
+
Expression
+
Meaning
+
+
+
+
+
any character
+
A single ordinary character matches itself
+
+
+
.
+
Matches any character
+
+
+
*
+
Matches a sequence of zero or more instances of the preceding item
+
+
+
[list]
+
Matches any single character in list: for example, [aeiou] matches all vowels
+
+
+
[^list]
+
A leading '^' reverses the meaning of list, so that it matches any single character not in list
+
+
+
^
+
Matches the beginning of the line (anchors the search at the start)
+
+
+
$
+
Matches the end of the line (anchors the search at the end)
+
+
+
+
+
Simple character matching
+
The simplest form of match is where a particular sequence of characters is being searched for. So, the regexp 'abc' matches any string which contains the characters 'abc' in that order.
+
s/abc/def/
+
This will find the first occurrence of 'abc' and will change it to 'def'.
+
Matching arbitrary characters
+
Using the '.' (dot) character, which matches any character, we could search for and change 'abc' or 'aac' or any other three-character string beginning with 'a' and ending with 'c' like so:
+
s/a.c/def/
+
If it is necessary to indicate an actual '.' character then it needs to be escaped by preceding it with a '\' (backslash) character. This indicates that its special regexp meaning is not to be used in this instance.
+
s/17\.30/17:30/
+
Zero or more of the preceding
+
Using the '*' character we can match sequences of variable length. So, if it is necessary to match 'bc', 'abc', 'aabc' or 'aaabc', for example, then the following could be used:
+
s/a*bc/def/
+
What this indicates is that the 'a' can occur zero or more times, followed by the 'bc'. So, the '*' indicates that we are searching for zero or more instances of the preceding item.
+
If it is necessary to indicate an actual '*' character then it needs to be escaped by preceding it with a '\' (backslash) character. This indicates that its special regexp meaning is not to be used in this instance.
+
Matching characters in or not in a set
+
Using the '[list]' expression we can match one of the characters in the given list. So, for example to match 'c' followed by any vowel, followed by 't' and replace it by 'dog' we could use:
+
s/c[aeiou]t/dog/
+
This will find all instances of 'cat', 'cet', 'cit', 'cot' and 'cut' and will replace them with 'dog'.
+
The other form of this expression '[^list]' matches any character not in the given list.
+
s/([^)]*)/(example 1)/
+
This is a common type of expression used in sed and elsewhere that you might find regular expressions. Here we are matching an open parenthesis followed by any characters which are not a close parenthesis followed by a close parenthesis. We replace what we find by the text '(example 1)'. This regexp will match any number of enclosed characters including zero. Note that the open and the close parentheses must be on the same line in this example. Of course, sed is a line-orientated editor.
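+
For example:
+
$ echo "Send money (to me) today" | sed -e 's/([^)]*)/(example 1)/'
+Send money (example 1) today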
+
The list can be simply a list of characters as we have seen, but it can also be a range such as 0-9 meaning all the digits from 0 to 9 inclusive. So this is a way of specifying an arbitrary digit such as:
+
s/A[4-6]/An/
+
This will replace 'A4', 'A5' or 'A6' with 'An'.
+
Anchoring at start or end of line
+
The character '^' (circumflex), when it occurs at the start of a regexp, indicates the start of a line. If it is used anywhere else it indicates the '^' character itself (though we just saw it being used for another purpose in a list).
+
The character '$' (dollar sign), when it occurs at the end of a regexp, indicates the end of a line. If it is used anywhere else it indicates the '$' character itself.
+
If the sequence 'abc' starts at the beginning of the line then use:
+
s/^abc/def/
+
If at the end of the line then this regexp would be needed:
+
s/abc$/def/
+
Replacement in the s command
+
The replacement used in the s command can be more complex than we have seen so far. We will go into more detail with what can be done here later in the series, but for now we'll look at the & character.
+
The & character denotes the whole matched portion of the REGEXP part of the command. If an actual '&' character is required, then it must be escaped.
+
So, to append 'def' to 'abc' the command would be:
+
s/abc/&def/
+
This can be seen as replacing 'abc' with 'abcdef'.
+
If a literal '&' is required then it needs to be escaped with a backslash:
+
s/fruit/apples \& pears/
+
Otherwise undesirable consequences will result:
+
$ echo "Eat your fruit!" | sed -e 's/fruit/apples & pears/'
+Eat your apples fruit pears!
+
Flags and the s command
+
The flag we will examine this time is g. This causes the replacement to be applied to all matches, not just the first. So, for example:
+
s/abc/def/g
+
This means that all instances of the sequence 'abc' will be replaced with 'def' in the current line. Without it, as we saw earlier, just the first instance will be replaced.
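+
The difference is easy to see on a line containing more than one match:
+
$ echo "abc abc abc" | sed -e 's/abc/def/'
+def abc abc
+$ echo "abc abc abc" | sed -e 's/abc/def/g'
+def def def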
+
Using sed commands in a file
+
As we saw in the Options section, sed can take its commands from a file (as well as from the command line). Commands can be formatted one per line, in which case the end of each line separates one command from another. There can be multiple commands per line, in which case they are separated by semicolons.
+
One way of using commands in a file might be the following:
+
$ sed -f - sed_demo1.txt <<END
+s/\./!/g
+s/community/Community/
+END
+
This uses the Bash shell's heredoc feature. This is directly equivalent to using a quoted list of commands:
+
$ sed -e 's/\./!/g
+s/community/Community/' sed_demo1.txt
+
In general it is better to create a sed command file in the way you would create any other text file, such as in an editor. Giving the file an extension of '.sed' will help to remind you what it is.
+
$ cat commands.sed
+s/\./!/g
+s/community/Community/
+$ sed -f commands.sed sed_demo1.txt
+
Examples
+
Example 1
+
$ wc -l sed_demo1.txt | sed -e 's/ .*$//'
+
This is a rather artificial example, as we have already seen, but we know that the wc command returns the number of lines followed by the filename when run in this way:
+
13 sed_demo1.txt
+
This is passed to sed which runs the script s/ .*$//. This replaces the first space and the zero or more characters that follow up to the end of the string by nothing, thereby deleting them. This leaves the number of lines as the final result.
+
Example 2
+
$ sed -e 's/is no/are no/' sed_demo1.txt
+
This fixes the fragment "There is no restrictions" replacing it with "There are no restrictions" in sed_demo1.txt. You will see that the word restrictions is on the next line, so it cannot be included in the regexp.
+
Of course, we cannot just change 'is' to 'are' because there are many uses of this letter sequence throughout the file. That is why we make it more specific by using the regexp 'is no'.
+
We are not permanently changing the file with this command, but you can isolate and display the changes by adding a call to grep in a pipeline as follows:
+
$ sed -e 's/is no/are no/' sed_demo1.txt | grep -A1 "are no"
+produced by the community - fellow listeners like you. There are no
+restrictions on how long the show can be, nor on the topic you can
+
The -A option to grep displays a number of lines after the target line, and the number chosen here is one line.
+
We will look at how sed can alter a file and save the results back to it in a later episode.
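+
Example 3
+
$ sed -e 's/is no/are no/' -e 's/topic /topics /' sed_demo1.txt
+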
This fixes the same fragment as Example 2, but also sorts out the phrase "the topic you can cover". The change is needed because of the use of the word "they" later in the sentence. We include the space in the target regexp because the word "topics" occurs later in the file.
+
We will look at this more in later shows in this series, but a sed script can consist of multiple commands, and these can be separated by semi-colons. So, the following way of writing the earlier command in this example is exactly equivalent:
+
$ sed -e 's/is no/are no/;s/topic /topics /' sed_demo1.txt
+
Example 4
+
$ sed -e 's/Hacker /Hobby /;s/Hackers/Hobbyists/' sed_demo1.txt
+
There is one instance of "Hacker" and one of "Hackers" in the text. We don't want "Hackers" to be turned into "Hobbys", so we differentiate the two instances as shown.
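+
Example 5
+
One way of writing such a command might be:
+
$ sed -e 's/is no/are no/;s/topic /topics /;s/\. /.  /;s/ /#/g' sed_demo1.txt
+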
This final example applies the earlier grammatical corrections, replaces a single space after a full-stop with two spaces, and (perversely) turns all spaces into hash marks. This stage uses the g flag to process all spaces.
+
This example shows that each of the commands is applied to each line in turn, and that it is possible to accumulate many commands to make a complex script. We have already seen how scripts can be more conveniently executed from a file, and we will examine this subject more deeply in a forthcoming episode in this series.
+
+
Introduction to sed - part 2 (HPR Show 1986)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
In the last episode we looked at sed at the simplest level. We looked at three command-line options and the 's' command. We introduced the idea of basic regular expressions.
+
In this episode we will cover all of these topics in more detail.
+
We are looking at GNU sed in this series. This version contains many extensions to POSIX sed. These extensions provide many more features, but sed scripts written this way are not portable.
+
This episode uses two new data files called sed_demo2.txt and sed_demo3.txt in the various demonstrations and examples.
+
Command line options
+
We looked at the -e and -f options in the last episode. We will look at several more of the available options this time, but will not cover everything. Refer to the GNU manual for the full list.
+
+
-n or --quiet or --silent
+
By default, sed prints out the pattern space at the end of each cycle through the script (see "How sed works" in the last episode). These options disable this automatic printing, and sed only produces output when explicitly told to via the 'p' flag or command (see "The p flag" below).
+
+
-i[SUFFIX] or --in-place[=SUFFIX]
+
This option allows sed to edit files in place. If a suffix is specified the original file is renamed by appending the suffix, and the edited file given the original name. This provides a way of creating a backup of the original. If no suffix is given the original file is replaced by the edited file.
+
By default sed treats the input files on the command line as a single stream of data. When the -i option is used the files are treated separately (see the -s option).
+
If the suffix contains a '*' symbol then this is replaced by the current file name. See Example 1 below for how to use this.
+
+
--follow-symlinks
+
This option is relevant to the -i option and is available only on systems that support symbolic links. If specified then, if the file being edited is a symbolic link the link will be followed and the actual file edited. If omitted (the default) the link will be broken and the actual file will not be changed.
+
+
-s or --separate
+
By default sed treats the input files on the command line as a single stream of data. This GNU sed extension causes the command to consider them as separate files. The relevance of this will become apparent in later episodes.
+
+
-r or --regexp-extended
+
By default sed uses basic regular expressions, but this GNU extension allows the use of extended regular expressions (those allowed by egrep). Standard sed uses backslashes to denote many special characters. In extended mode these backslashes are not required. However, the result is not portable.
+
+
+
More about the s command
+
Regular expressions
+
Regular expressions in sed can be more complex than those we looked at in the last episode, allowing much greater flexibility. The new meta-characters we'll look at this time all start with a backslash. Many other Unix tools that use regular expressions do the same, but others do not. This can be confusing, so it's important to be aware of the differences.
+
+
+
+
Expression
+
Meaning
+
+
+
+
+
\+
+
Similar to * but matches a sequence of one or more instances of the preceding item
+
+
+
\?
+
Similar to * but matches a sequence of zero or one instance of the preceding item
+
+
+
\{i\}
+
Matches exactly i sequences (i is a decimal integer)
+
+
+
\{i,j\}
+
Matches between i and j sequences, inclusive
+
+
+
\{i,\}
+
Matches i or more sequences, inclusive
+
+
+
\(regexp\)
+
Groups the inner regexp. Allows it to be followed by a postfix operator, or can be used for back references (see below)
+
+
+
regexp1\|regexp2
+
Matches regexp1 or regexp2, \| is used to separate alternatives
+
+
+
+
+
One or more of the preceding
+
Using the '\+' modifier matches sequences of variable length starting with one instance. So, using an example from the last episode:
+
s/a\+bc/def/
+
Here the sequence being matched is 'abc', 'aabc', 'aaabc' and so forth. It does not match 'bc' since there has to be at least one 'a'.
+
This is a GNU extension.
+
Zero or one of the preceding
+
The '\?' modifier matches zero or one of the preceding expression. So, considering the following example:
+
s/a\?bc/def/
+
This matches 'bc' and 'abc' because zero or one 'a' is specified.
+
This is a GNU extension.
+
A fixed number of the preceding
+
Using the '\{i\}' modifier we specify a fixed number of the preceding expression:
+
s/a\{3\}bc/def/
+
This only matches 'aaabc' since three 'a' characters are needed.
+
Between i and j of the preceding
+
Using the '\{i,j\}' modifier we specify a number of the preceding expression between lower and upper bounds:
+
s/a\{1,5\}bc/def/
+
This matches 'abc', 'aabc', 'aaabc', 'aaaabc' and 'aaaaabc'; that is, between 1 and 5 'a' characters followed by 'bc'.
+
i or more of the preceding
+
Using the '\{i,\}' modifier we specify a number of the preceding expression from a lower value to an undefined upper limit:
+
s/a\{1,\}bc/def/
+
This matches 'abc', 'aabc' and so on, with no limit to the number of 'a' characters. This is the same as:
+
s/a\+bc/def/
+
However, the lower limit does not have to be 1.
+
Grouping a regexp
+
So far the modifiers we have seen have been applied to single characters. However, with grouping we can apply them to a more complex expression. The group is enclosed in \( and \). For example:
+
s/\(abc\)*def/ghi/
+
Here the complete regexp matches 'def', 'abcdef', 'abcabcdef' and so forth with multiple instances of 'abc'.
+
Each group is numbered by sed simply by counting \( occurrences. This allows references to be made to these sub-expressions as we will see shortly.
+
Alternative regexps
+
It is possible to build a regexp with alternative sub-expressions separated by the characters \|. For example, say the intention is to match either 'Hello World' or 'Goodbye World' without an exclamation mark at the end and add one, the following might be tried as a first attempt:
+
$ echo "Hello World" | sed -e 's/Hello\|Goodbye World/&!/'
+Hello! World
+$ echo "Goodbye World" | sed -e 's/Hello\|Goodbye World/&!/'
+Goodbye World!
+
Those results might be unexpected. What has happened is that sed has just matched the 'Hello' in the first case, and so the replacement '&!' has resulted in an exclamation mark being placed after this word. However, it has matched 'Goodbye World' in the second case so the exclamation mark has been placed as we expected.
+
To match either 'Hello' or 'Goodbye' we need grouping:
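+
$ echo "Hello World" | sed -e 's/\(Hello\|Goodbye\) World/&!/'
+Hello World!
+$ echo "Goodbye World" | sed -e 's/\(Hello\|Goodbye\) World/&!/'
+Goodbye World!
+
Greediness
+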
The way that sed matches a regexp is sometimes a little unexpected. This is because of what is referred to as "greediness", where more is matched than might be predicted.
+
The following is taken from the GNU manual:
+
Note that the regular expression matcher is greedy, i.e., matches are attempted from left to right and, if two or more matches are possible starting at the same character, it selects the longest.
+
For example, say we are trying to process the example file for this episode sed_demo2.txt, looking for a word starting with capital 'H' at the start of a line. It would be tempting to use a regexp such as '^H.\+ ' meaning a line starting with capital 'H' up to a space. In the example below we enclose what was matched by square brackets, printing out only the lines that matched (see the sections entitled "Command line options" for the '-n' option and "The p flag" below):
+
$ sed -ne 's/^H.\+ /[&]/p' sed_demo2.txt
+[Hacker Public Radio (HPR) is an Internet Radio show (podcast) that ]releases
+[HPR" for more ]information.
+[Hacker Public Radio is dedicated to sharing knowledge. We do ]not
+
The regexp matcher has matched everything from the leading 'H' to the last space on the line.
+
One technique for limiting this behaviour is shown below:
+
$ sed -ne 's/^H[^ ]\+ /[&]/p' sed_demo2.txt
+[Hacker ]Public Radio (HPR) is an Internet Radio show (podcast) that releases
+[HPR" ]for more information.
+[Hacker ]Public Radio is dedicated to sharing knowledge. We do not
+
Here, rather than following the 'H' with a dot (any character) we use a list in square brackets. The list is negated by using a circumflex, so it means "not space". So, here we are looking for a capital 'H' at the start of a line followed by one or more "not spaces" then a space. Notice how this has constrained the greediness.
+
Replacement
+
Last time we saw the use of & meaning the whole of the text which matched the REGEXP part of the command.
+
Back references
+
As we saw earlier, there is also a way of referring to a matching group. We use \n where n is a number between 1 and 9 which refers to the nth group between \( and \) delimiters (as discussed above under "Grouping a regexp").
+
For example:
+
$ echo "Hacker Public Radio" | sed -e 's/\(.\+\) \(.\+\) \(.\+\)/\3 \2 \1/'
+Radio Public Hacker
+
Here we look for three groups of characters separated by a single space and we group each one. We then replace them in the order 3, 2, 1, resulting in the words being printed in reverse order.
+
Interestingly, these back references can be used inside the regexp itself:
+
$ echo "Run Lola Run" | sed -e 's/\(.\+\) \(.\+\) \1/\2 \1 \1/'
+Lola Run Run
+
Here the first group matches the first "Run", and we use it as the last element of the regexp. We could have made it a group:
+
$ echo "Run Lola Run" | sed -e 's/\(.\+\) \(.\+\) \(\1\)/\2 \3 \1/'
+Lola Run Run
+
There is no point in doing this since the result is the same yet it makes sed work harder.
+
Case manipulation
+
GNU sed provides a means of changing the case of the replacement text using the sequences \L, \l, \U, \u and \E.
+
+
\L
+
Turn the replacement to lowercase until a \U or \E is found,
+
+
\l
+
Turn the next character to lowercase,
+
+
\U
+
Turn the replacement to uppercase until a \L or \E is found,
+
+
\u
+
Turn the next character to uppercase,
+
+
\E
+
Stop case conversion started by \L or \U.
+
+
+
When used in conjunction with grouping the following results may be obtained (from Ken's script for the Community News perhaps):
+
$ echo "Hacker Public Radio" |\
+ sed -e 's/\(.\+\) \(.\+\) \(.\+\)/\U\1 \L\1 \U\2 \L\2 \U\3 \L\3/'
+HACKER hacker PUBLIC public RADIO radio
+
Flags
+
We saw the 'g' flag in the last episode, which makes the substitution repeat for each line applying to all matches. We will look at some other flags in this episode, but some of the more advanced features will be omitted here.
+
The number flag
+
There is also a number flag which applies the substitution only to the numberth match. For example, given some sample text with several instances of 'ny':
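+
$ echo "any penny in any pocket" | sed -e 's/ny/\U&/2'
+any penNY in any pocket
+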
Here the match is for 'ny', and the replacement is the matching text forced to upper case (see "Case manipulation" above). However, we restrict the substitution to just the second match, as you can see from the result.
+
The p flag
+
This causes the result of the substitution to be printed. More precisely, it causes the pattern space to be printed if the substitution was made.
+
Normally this happens anyway, but when the -n command line option has been selected (see "Command line options") nothing is printed unless the script explicitly requests it.
+
$ sed -n -e 's/Hacker /Hobby /p' sed_demo2.txt
+Hobby Public Radio (HPR) is an Internet Radio show (podcast) that releases
+Hobby Public Radio is dedicated to sharing knowledge. We do not
+
Only the lines where 'Hacker ' was replaced by 'Hobby ' are reported.
+
The I and i flags
+
These flags are a GNU sed extension. They cause the regexp to be case-insensitive. Both forms of this flag have the same meaning.
+
$ sed -n -e 's/hacker /Hobby /ip' sed_demo2.txt
+Hobby Public Radio (HPR) is an Internet Radio show (podcast) that releases
+Hobby Public Radio is dedicated to sharing knowledge. We do not
+
GNU Extensions for Escapes in Regular Expressions
+
GNU sed contains a way of referencing (or producing) special characters. These are documented in the GNU Manual (under the same title as this section). We will not look at all of these in this series, but will touch on some of the more generally useful ones.
+
+
\n
+
Produces or matches a newline (ASCII 10).
+
+
\t
+
Produces or matches a horizontal tab (ASCII 9).
+
+
+
There are also escapes which match a particular character class which are valid only in regular expressions. These are mentioned here because they can be very useful, as we will see in the examples:
+
+
\w
+
Matches any word character. A word character is any letter or digit or the underscore character.
+
+
\W
+
Matches any non-word character.
+
+
\b
+
Matches a word boundary; that is it matches if the character to the left is a word character and the character to the right is a non-word character, or vice-versa.
+
+
\< \>
+
(These are not very clear in the sed documentation but are available). These are alternative ways of denoting word boundaries, with \< being used for the left boundary and \> for the right.
+
+
\B
+
Matches everywhere but on a word boundary; that is it matches if the character to the left and the character to the right are either both word characters or both non-word characters.
+
+
+
Examples
+
Example 1
+
This example shows the use of the -i option:
+
$ for f in {A..C}; do echo $RANDOM > $f; done
+$ sed -i'saved_*.sav' -e 's/4/@/g' {A..C}
+$ cat {A..C}
+1@855
+2@593
+@217
+$ cat saved_{A..C}.sav
+14855
+24593
+4217
+
The first line generates three files called A, B and C using brace expansion in a for loop. Each file contains a random number. The second line runs sed against these files replacing any instance of the digit 4 by an '@' symbol. The third line shows the contents of these three files. Backups of their original contents are held in files called saved_A.sav, saved_B.sav and saved_C.sav. Their contents are shown by the final cat command.
+
Example 2
+
The second example file sed_demo3.txt contains statistics pulled from the HPR website. Imagine that we are writing a Bash script to parse this, and we want the number of days to the next free slot in a variable. The line in question looks like this:
+
Days to next free slot: 8
+
There are two lines beginning with the word 'Days' so we have to be careful:
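+
$ DTNFS=$(sed -ne 's/^Days to[^:]\+:[\t ]\+\([0-9]\+\)/\1/p' sed_demo3.txt)
+$ echo "DTNFS=$DTNFS"
+DTNFS=8
+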
The regexp starts with '^Days to' which makes it match the target line. After this come some other words and a colon. We'll represent this with '[^:]\+:' meaning one or more "not colons" followed by a colon. Then there are what look like spaces or could be a tab character (Hint: it's actually a tab). For safety's sake we'll represent this as '[\t ]\+' meaning one or more of tab or space. Then we have a regexp group consisting of '[0-9]\+' meaning one or more digits.
+
If this matches then we'll have a back reference to the group which we can return -- 8 in this case. The overall sed command uses the '-n' option suppressing printing and the 's' command uses the 'p' flag to print just the matched line.
+
The output from the sed command is returned in a command substitution and is used to set the variable DTNFS. This is echoed in this fragment to show what was returned.
+
It is possible that the sed command could return nothing, in which case the variable would not be set. An actual Bash script doing this should check for this eventuality and take appropriate action.
+
Example 3
+
In this example we use the '\n' escape we examined earlier (backslash 'n' meaning newline):
+
$ sed -e 's/\(Hacker\) \(Public\) \(Radio\) /\1\n\2\n\3\n/' sed_demo2.txt | head -4
+Hacker
+Public
+Radio
+(HPR) is an Internet Radio show (podcast) that releases
+
We simply looked for the words "Hacker Public Radio", grouping each of them so that they could be back referenced, and output them each followed by a newline. We used the head command to view just the first 4 lines produced by this sed command.
+
You might have expected that the following would join all the lines of the file together, but that doesn't happen:
+
$ sed -e 's/\n//' sed_demo2.txt
+
That is because sed places one line at a time into the pattern space, removing the trailing newline. Then it applies the script to it and (unless the '-n' option was used) prints it out with a trailing newline.
+
We will look at ways in which actions like line concatenation can be achieved in a later episode.
+
Example 4
+
We saw the '-r' (--regexp-extended) option earlier in this episode. If we were to use this in conjunction with Example 3 we would write the following:
+
$ sed -r -e 's/(Hacker) (Public) (Radio) /\1\n\2\n\3\n/' sed_demo2.txt | head -4
+Hacker
+Public
+Radio
+(HPR) is an Internet Radio show (podcast) that releases
+
This is a useful feature, but it needs to be used with caution because it is specific to GNU sed and not portable.
+
Example 5
+
One task often needed when processing text is to remove leading and trailing spaces. With sed you might expect the following would work:
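+
$ echo "   spaces before and after   " | sed -e 's/^ *\(.*\) *$/\1/'
+
However, because the '\(.*\)' group is greedy it also swallows the trailing spaces, so only the leading ones are removed.
+
Example 6
+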
In the audio I said that I would be demonstrating the use of word boundaries in an example. I had forgotten to add it at the time of recording, so this one is not described in the podcast.
+
Really, this is a piece of extreme silliness, but it does demonstrate word boundaries. It is being run on the example file from the last episode.
+
$ sed -e 's/\<[A-Z]\w*\>/Chicken/g;s/\b[a-z]\w*\b/chicken/g' sed_demo1.txt
+
The example consists of two 's' commands separated by a semicolon. The first matches any word that begins with a capital letter, using the \< and \> word boundaries and the \w expression. It replaces each occurrence it finds with an alternative capitalised word, using the 'g' flag to ensure this happens.
+
The second 's' command does the same for lower-case words but uses the \b word boundary instead.
+
+
The file sed_demo2.txt:
+
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct
+continuation of Twatech radio. Please listen to StankDawg's "Introduction to
+HPR" for more information.
+
+What differentiates HPR from other podcasts is that the shows are
+produced by the community - fellow listeners like you. There is no
+restrictions on how long the show can be, nor on the topic you can
+cover as long as they "are of interest to Hackers". If you want to see
+what topics have been covered so far just have a look at our Archive.
+We also allow for a series of shows so that host(s) can go into more
+detail on a topic.
+
+You can download/listen to the show here or you can subscribe to the
+show in your favorite podcatching client (like BashPodder) to
+automatically get our new shows as soon as they are available. You can
+copy and redistribute the shows for free provided you adhere to the
+Creative Commons AttributionShareAlike 3.0 License.
+
+We do not filter the shows in any way other than to check if they are
+audible and not blatant attempts at spam.
+
+Hacker Public Radio is dedicated to sharing knowledge. We do not
+accept donations, but if you listen to HPR, then we would love you to
+contribute one show a year.
+
The file sed_demo3.txt:
+
+Started: 10 years, 4 months, 16 days ago (2005-10-10)
+Renamed HPR: 8 years, 1 months, 24 days ago (2007-12-31)
+Total Shows: 2561
+Total TWAT: 300
+Total HPR: 2261
+HPR Hosts: 256
+Days to next free slot: 8
+Hosts in Queue: 8
+Shows in Queue: 25
+Comments waiting approval: 0
+Files on the FTP Server: 5
+Number of Emergency Shows: 7
+Days until show without media: 0
+1456092301,327189901,257033101,2561,300,2261,256,8,8,25,0,5,7,0
+
The file hpr1997_demo2.sed:
+
+/a.\{1,5\}b.\{1,5\}c/{
+ s/^/G1: /
+ s/a/[a]/g
+ s/b/[b]/g
+ s/c/[c]/g
+ p
+ n
+}
+/a.\{1,5\}b/{
+ s/^/G2: /
+ s/a/[a]/g
+ s/b/[b]/g
+ p
+ n
+}
+
The file hpr1997_example_6.sed:
+
+/^ *$/!{
+ s/is no/are no/
+ s/topic\b/topics/
+ s/"are of/are "of/
+ s/(like/(such as/
+}
+
The file hpr1997_example_7.sed:
+
+#!/bin/sed -nf
+/^.\{75,80\}$/{
+ s/$/ /
+ s/^\(.\{80\}\).*/|\1|/
+ p
+}
+
Introduction to sed - part 3 (HPR Show 1997)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
In the last episode we looked at sed at a more advanced level. We looked at all of the command-line options which we will cover in this series and examined the s command in much more detail. We covered many more details of regular expressions.
+
In this episode we will look at more sed commands and how to use them.
+
Commands
+
So far we have concentrated on the s command. There are many more commands within sed. Most commands operate on lines or ranges of lines. We will look first at how such line addressing is achieved.
+
Selecting lines
+
The following table summarises the available addressing methods for reference; longer explanations are then given.
+
+
+
+
Address
+
Explanation
+
+
+
+
+
number
+
Matches the numbered line in the input
+
+
+
first~step
+
Matches every stepth line starting with line first
+
+
+
$
+
Matches the last line of the input
+
+
+
/regexp/
+
Selects any line matching the regexp
+
+
+
addr1,addr2
+
Selects lines from the first to the second address inclusively
+
+
+
+
We will look at these addresses in detail, but to give examples as we go we need to look ahead at one of the commands we'll be examining in more detail later. We'll use the p command (as opposed to the flag we saw in the last episode). This just prints the line or lines we have addressed. This only makes sense if the -n option is used to prevent the auto-printing of the non-matching lines.
+
Selecting a line by number
+
This form of address just consists of a number, and matches the line with that number in the input stream. So, to print just the first line of a file the following would suffice:
+
$ sed -ne '1p' sed_demo2.txt
+
Remember that normally sed treats all of the input files as one continuous stream, so that line number will match just once.
+
If sed is run with either of the -i or the -s options, multiple input files are treated separately. In this example, there will be two instances of line number 5:
+
$ sed -sne '5p' sed_demo1.txt sed_demo2.txt
+HPR" for more information.
+HPR" for more information.
+
Selecting every nth line starting at a number
+
This is a GNU extension which allows the addressing method to specify a starting line number and the size of the step to the next line number. Lines are selected by adding the starting point to n times the size of the step.
+
So, 1~2 means line 1, line 1+1*2=3, line 1+2*2=5, and so on for every odd numbered line.
+
Specifying 2~3 means line 2, line 2+1*3=5, line 2+2*3=8, and so on for every third line.
+
There is an example of using this addressing form with one of the demonstration files below in Example 2.
+
Selecting the last line of the file
+
The '$' symbol as an address matches the last line of the file, or more accurately, the last line of the stream of data read by sed, which when presented with multiple files means the last line of the last file.
+
As with the discussion for single line addressing, if sed is run with either of the -i or the -s options, multiple input files are treated separately and every file will have a last line.
+
See Example 3 below for a demonstration of this type of addressing.
+
Selecting by regular expression
+
An address of the form '/regexp/' is a regexp which matches various lines in the input stream.
+
$ sed -ne '/HPR/p' sed_demo1.txt
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+HPR" for more information.
+What differentiates HPR from other podcasts is that the shows are
+
Alternative delimiters
+
Normally the delimiter for a regexp is the '/' character, and we have used this exclusively throughout the series so far. If the regexp needs to contain this delimiter then it needs to be preceded by a backslash.
+
However, it is possible to use alternative delimiters, which is useful in this type of circumstance. The first instance of the alternative delimiter must be preceded by a backslash.
+
The following regexp examples have the same effect:
+
/etc\/passwd/
+\#etc/passwd#
+
This is particularly useful when the regexp contains multiple slashes which all need to be escaped.
+
Note: In the case of the s command it is not necessary to precede the first alternative delimiter with a backslash. Indeed, if a backslash is used before the delimiter, an error is produced, yet a backslash on its own seems to be a valid delimiter! This does not seem to be documented, but it is presumably because in the s command the 's' is expected to be followed by a delimiter, whereas a regexp delimiter is harder for a parser to recognise. The following examples all work except the one followed by an error message:
+
$ sed -ne 's|HPR|Banana|p' sed_demo1.txt
+
+$ sed -ne 's\|HPR|Banana|p' sed_demo1.txt
+sed: -e expression #1, char 15: unterminated `s' command
+
+$ sed -ne 's\HPR\Banana\p' sed_demo1.txt
+
+$ sed -ne 'spHPRpBananapp' sed_demo1.txt
+
Empty regular expressions
+
Another feature of regular expressions we have not looked at before is the case where the regexp in an address or an s command is empty. This is sed's way of representing the last regular expression that matched.
+
The following example uses this feature in three s commands. The first changes the first space on each line to an asterisk, the second changes the second space to an underscore and the third changes the third space to a plus sign. The second and third s commands have empty regexps so they use the previous matching one. The example shows the effect on the first line of the file:
+
$ sed -e 's/ /*/;s//_/;s//+/' sed_demo1.txt
+Hacker*Public_Radio+(HPR) is an Internet Radio show (podcast) that releases
+
The GNU Manual warns that the use of modifiers in these cases can be problematic:
+
Note that modifiers to regular expressions are evaluated when the regular
+expression is compiled, thus it is invalid to specify them together with
+the empty regular expression.
+
Modifiers
+
There are two modifiers available which change the way in which regular expressions in addresses behave. These are both GNU extensions.
+
We have already seen the I and i flags in the context of the s command which make the regexp case insensitive. There is also an I modifier for address regexps, though there is no i equivalent. This modifier has the same effect:
+
$ sed -ne '/hpr/Ip' sed_demo1.txt
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+HPR" for more information.
+What differentiates HPR from other podcasts is that the shows are
+
The second modifier is M which affects text in the pattern space containing multiple newlines. We will not be looking at this in detail in this episode, but will examine it in the next.
+
Selecting an address range
+
The address range allows a sed script to match the lines in the input data from a starting position to (and including) an ending position. The range is written as two addresses (of the types we have seen so far) separated by a comma:
+
$ sed -ne '1,3p' sed_demo1.txt
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct
+
This simply prints lines 1 to 3. Note that, as before, we used the -n option to prevent automatic printing.
+
$ sed -ne '/^We/,$p' sed_demo1.txt
+We also allow for a series of shows so that host(s) can go into more
+detail on a topic.
+
This example prints from a line beginning with 'We' up to the end of the file (the next line in this case).
+
$ sed -ne '/^What/,/^produced/p' sed_demo1.txt
+What differentiates HPR from other podcasts is that the shows are
+produced by the community - fellow listeners like you. There is no
+
This example prints from a line which begins with 'What' and ends with a line beginning with 'produced' (the next line in fact).
+
There are some GNU sed address range extensions which can be found in the GNU Manual. We will not be looking at these in this series.
+
Negating an address match
+
All of the address types we have seen in this section can be "negated". For example, using a line number and negating it tells sed to match all lines but the selected line. Negation is achieved by adding a '!' character after the address.
+
See Example 1 below for an example of line number negation.
+
The addressing form matching every nth line starting at a specific line can also be negated. So, for example, the following command would print all the odd-numbered lines in this 13-line file:
+
$ sed -ne '2~2!p' sed_demo1.txt
+
Whereas without negation it would print all the even-numbered lines.
+
Negating the '$' (last line of file) means all lines except the last line. Negating a regular expression means all lines that do not match. So, for example, the following command will display all lines that do not contain a capital letter:
+
$ sed -ne '/[A-Z]/!p' sed_demo1.txt
+
This next example matches the same lines, but rather than just printing them it replaces every first letter of a word with the capital equivalent:
+
$ sed -ne '/[A-Z]/!s/\b\w/\u&/gp' sed_demo1.txt
+Restrictions On How Long The Show Can Be, Nor On The Topic You Can
+Detail On A Topic.
+
This emphasises how addresses can be associated with many of the commands that sed uses. Note that the 'p' used here is the flag to the s command and, as such, it will only print lines on which a substitution has taken place.
+
If negation is used with an address range, then it applies to the range. It is not possible to negate the individual addresses in the range. The effect is to match all lines outside the range. So the following example, instead of matching the two lines in the file as the un-negated form did, will match the rest of the file:
+
$ sed -ne '/^What/,/^produced/!p' sed_demo1.txt
+
Comments in scripts
+
It is possible to add comments to a sed script. This makes most sense when the sed commands are in a file. Just like in many scripting and programming languages the '#' character begins a comment, and the comment continues to the end of the line (to the newline).
+
As with many other scripting languages running under Unix or Linux, if the command file begins with a specially formatted comment line and the file is made executable, the file may be directly invoked from the command line:
+
$ cat > demo.sed
+#!/bin/sed -f
+# Be 1337
+s/Hacker/H4x0r/g
+CTRL-D
+$ chmod u+x demo.sed
+$ ./demo.sed sed_demo1.txt
+H4x0r Public Radio (HPR) is an Internet Radio show (podcast) that releases
+
In this example the cat command is used to redirect what is typed on STDIN into a file. The end of the data on STDIN is signalled by pressing CTRL-D. The file contains the comment which makes it a sed script, and another indicating what it does, followed by a single s command. The chmod command makes the resulting file executable, and then it is invoked to process the file sed_demo1.txt. (Note, only the first line of output is shown here.)
+
The Quit command
+
This command, which consists of a lower-case q, causes sed to exit. It can be preceded by a single address meaning "exit when this line is reached". In GNU sed the command can be followed by an exit code.
+
The current pattern space is printed unless the -n option was selected.
+
For example, as a variant of the last example, the following script would edit the first three lines then quit:
+
$ sed -ne 's/Radio/R4d10/gp;3q' sed_demo1.txt
+Hacker Public R4d10 (HPR) is an Internet R4d10 show (podcast) that releases
+R4d10 FreeK America, Binary Revolution R4d10 & Infonomicon, and it is a direct
+
Of course, the same effect can be achieved by adding an address range to the s command:
+
$ sed -ne '1,3s/Radio/R4d10/gp' sed_demo1.txt
+
In general, the use of q to quit has the advantage that it stops processing and exits. If sed was reading a very large file and the work it was asked to do was completed relatively early in the file, stopping it from reading the rest of the file might be advantageous in terms of speed.
+
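The GNU exit code mentioned above can be demonstrated as follows (a small sketch for GNU sed; the 5 after the q becomes the exit status reported by the shell):
+
$ sed -ne '3q5' sed_demo1.txt
+$ echo $?
+5
+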
Delete the pattern space
+
This command, which consists of a lower-case d, deletes the pattern space and causes sed to start the next cycle by reading the next line.
+
The command may be preceded by any of the various types of addresses. The effect it has is to omit the lines in question from the output stream.
+
For example, to omit (delete) all lines beginning with 'H' the following command would suffice:
+
$ sed -e '/^H/d' sed_demo1.txt
+
Alternatively, to delete all lines that do not begin with 'H':
+
$ sed -e '/^H/!d' sed_demo1.txt
+
Note how we have negated the address match here.
+
Print the pattern space
+
This command is equivalent to the p flag used with the s command, but is stand-alone. It consists of a lower-case p which may be preceded by any of the various types of addresses.
+
The command is only useful with the -n option to the sed command. Without this option it just prints the relevant line(s) again.
+
For example, to print lines 1-5 of a file:
+
$ sed -ne '1,5p' sed_demo1.txt
+
This is equivalent to the command: head -5 sed_demo1.txt
+
Print and get the next line
+
The n command is mainly relevant to more complex scripts. It is rare to see examples of its use in the simpler scripts we have seen so far. We will start looking at these and other less commonly used sed commands in the next episode. The GNU Manual describes the command as follows:
If auto-print is not disabled, print the pattern space, then, regardless,
+replace the pattern space with the next line of input. If there is no more
+input then sed exits without processing any more commands.
+
An example of using the n command is in the next section.
+
Grouping commands
+
If it is necessary to perform several sed commands on a given input line or set of lines, there needs to be a means of grouping them together. This can be achieved by enclosing the commands between '{' and '}' characters.
+
The following example shows the same address range we have used before, but for every matching line a group of commands is performed. An s command adds a greater than sign and a tab at the start of the line, then a p command prints the result:
+
$ sed -ne '/^What/,/^produced/{s/^/>\t/;p}' sed_demo1.txt
+> What differentiates HPR from other podcasts is that the shows are
+> produced by the community - fellow listeners like you. There is no
+
The next example is a fairly useless demonstration. It shows a command file with two groups associated with regular expression addresses. The first regexp matches a line that contains the letter 'a' followed by a 'b' within 1 to 5 characters, which is in turn followed by a 'c' within 1 to 5 characters. The second is similar but matches only 'a' and 'b'.
+
The first group uses four s commands to mark the line with 'G1:' to show it was processed by group 1 and to highlight all the 'abc' characters, print the result and move to the next line. The second group does the same but with 'G2:' and for 'a' and 'b'.
+
# Group 1: 'a', then 'b' within 1-5 characters, then 'c' within 1-5 characters
/a.\{1,5\}b.\{1,5\}c/{
+ # mark the line, highlight each of a/b/c, print, then move to the next line
+ s/^/G1: /
+ s/a/[a]/g
+ s/b/[b]/g
+ s/c/[c]/g
+ p
+ n
+}
+# Group 2: 'a', then 'b' within 1-5 characters
+/a.\{1,5\}b/{
+ s/^/G2: /
+ s/a/[a]/g
+ s/b/[b]/g
+ p
+ n
+}
+
The n commands here ensure that the same line is not processed by both of the groups. Without them, a line matching the first (more specific) regexp would also match the second one and would be printed a second time.
+
Running this, assuming the commands are in the file demo2.sed, we get:
+
$ sed -nf demo2.sed sed_demo1.txt
+G2: restrictions on how long the show c[a]n [b]e, nor on the topic you c[a]n
+G1: wh[a]t topi[c]s h[a]ve [b]een [c]overed so f[a]r just h[a]ve [a] look [a]t our Ar[c]hive.
+
The command file is included so you can experiment with it if you want.
+
Examples
+
Example 1
+
In this example there are two ways of printing all but line 1 of a file:
+
$ sed -ne '1!p' sed_demo1.txt
+
+$ sed -ne '2,$p' sed_demo1.txt
+
In the first case the address is '1!' meaning all lines but line number 1.
+
The alternative way is to specify an address of '2,$', meaning line 2 to the end of the file.
+
Example 2
+
This time we use the first~step form of addressing:
+
$ nl -w3 -ba sed_demo1.txt | sed -ne '1~5p'
+ 1 Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+ 6
+ 11 what topics have been covered so far just have a look at our Archive.
+
The nl command numbers lines in a file. The -w3 option sets the width of these numbers to 3 columns, and -ba requests that even blank lines be numbered (the default is not to do this). We then feed the numbered file to sed and ask for the first line and every 5th line after to be printed.
+
Note that if we used '1~5!p', negating the addressing, we would see all lines except 1, 6 and 11.
+
Example 3
+
Here we demonstrate the $ address - last line of input:
+
$ sed -ne '$p' sed_demo1.txt sed_demo2.txt
+contribute one show a year.
+
+$ sed -sne '$p' sed_demo1.txt sed_demo2.txt
+detail on a topic.
+contribute one show a year.
+
In the first case we just get the last line of the second file because sed sees the two files as a continuous stream.
+
In the second case, on the other hand, because the -s option has been included, sed sees each file as separate so we get the last line of each.
+
Example 4
+
The regexp form of addressing is demonstrated here:
+
$ sed -ne '/long/p' sed_demo1.txt
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+restrictions on how long the show can be, nor on the topic you can
+cover as long as they "are of interest to Hackers". If you want to see
+
The regexp 'long' is enclosed in (forward) slash characters as we have seen in all of the examples so far. There will be times when it is more convenient to change the delimiter and, as we have seen, the alternative character must then be preceded by a backslash (\):
+
$ sed -ne '\#long#p' sed_demo1.txt
+
Whatever is used as a delimiter, if it occurs in the regexp it needs to be preceded by a backslash.
+
Example 5
+
The address range can use any of the address types we have discussed. A start and end address are separated by a comma:
+
$ sed -ne '1,/hpr/Ip' sed_demo1.txt
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+
Here we have started at line number 1 and continued to the next line containing 'HPR'. We specified it in lower case but qualified it with a capital 'I' to signal case-insensitivity.
+
$ sed -ne '/^what/I,/^PRODUCED/Ip' sed_demo1.txt
+What differentiates HPR from other podcasts is that the shows are
+produced by the community - fellow listeners like you. There is no
+what topics have been covered so far just have a look at our Archive.
+We also allow for a series of shows so that host(s) can go into more
+detail on a topic.
+
This example indicates that the I modifier can be applied to both regular expressions in an address range. Note how the '^what' regexp matches twice and '^PRODUCED' matches once. So, the first two lines are from the first match and the last three are from the second match.
+
The following way of doing the same thing might make this clearer:
+
$ awk '{printf "%-75s (%d)\n",$0,NR}' sed_demo1.txt | sed -ne '/^what/I,/^PRODUCED/Ip'
+What differentiates HPR from other podcasts is that the shows are (7)
+produced by the community - fellow listeners like you. There is no (8)
+what topics have been covered so far just have a look at our Archive. (11)
+We also allow for a series of shows so that host(s) can go into more (12)
+detail on a topic. (13)
+
Here an awk command is used to add line numbers to the ends of the lines (where they are not in the way), before passing them to sed. This lets you see that lines 7 and 8 were part of the first address range and 11-13 were the second range. The second range did not find an instance of '^PRODUCED' in any form by the time the last line (13) was reached.
+
Example 6
+
This example is a further development of the various corrections we applied to the file sed_demo2.txt in the earlier episodes.
+
The file contains a single group of commands controlled by an address using a regular expression. The regexp matches all lines in the file which are not blank (contain nothing but zero or more spaces).
+
The contents of the group are all s commands which modify the various grammatical errors in the text we have already seen, and a few others.
+
Example 7
+
The script consists of a single regexp address which controls a group of three commands. The regexp matches any line which contains between 75 and 80 characters. The first member of the group is an s command that adds 5 spaces to the end of each line to make sure each line is at least 80 characters long. The second s command matches the first 80 characters and replaces them with the captured characters preceded and followed by a vertical bar. Characters beyond 80 are discarded. The third command is a p which prints the edited line. The initial comment line starting with the '#!' characters sets the -n and -f options, ensuring that only matching lines will be printed.
+
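The script itself might look something like this (a sketch reconstructed from the description above, not the original listing):
+
#!/bin/sed -nf
+/^.\{75,80\}$/{
+ s/$/     /
+ s/^\(.\{80\}\).*/|\1|/
+ p
+}
+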
The created file needs to be made executable, and then it can be invoked as shown below:
+
$ chmod u+x example_7.sed
+$ ./example_7.sed sed_demo1.txt
+|Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases |
+|shows every weekday Monday through Friday. HPR has a long lineage going back to |
+|Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct |
+|continuation of Twatech radio. Please listen to StankDawg's "Introduction to |
+
This executable sed script is available on the HPR site.
+
+
diff --git a/eps/hpr2007/hpr2007_ASUS_keyboard.png b/eps/hpr2007/hpr2007_ASUS_keyboard.png
new file mode 100755
index 0000000..9ddb80a
Binary files /dev/null and b/eps/hpr2007/hpr2007_ASUS_keyboard.png differ
diff --git a/eps/hpr2007/hpr2007_HP_keyboard.png b/eps/hpr2007/hpr2007_HP_keyboard.png
new file mode 100755
index 0000000..9446235
Binary files /dev/null and b/eps/hpr2007/hpr2007_HP_keyboard.png differ
diff --git a/eps/hpr2007/hpr2007_Kratos_keyboard.png b/eps/hpr2007/hpr2007_Kratos_keyboard.png
new file mode 100755
index 0000000..fe44faf
Binary files /dev/null and b/eps/hpr2007/hpr2007_Kratos_keyboard.png differ
diff --git a/eps/hpr2007/hpr2007_full_shownotes.html b/eps/hpr2007/hpr2007_full_shownotes.html
new file mode 100755
index 0000000..a658c31
--- /dev/null
+++ b/eps/hpr2007/hpr2007_full_shownotes.html
@@ -0,0 +1,94 @@
+
+ My new laptop (HPR Show 2007)
+
My new laptop (HPR Show 2007)
+
Dave Morriss
+
Table of Contents
+
The OggCamp Raffle
+
I attended OggCamp15 in Liverpool at the end of October 2015. As usual I bought some raffle tickets as a contribution to the expenses of the (un-)conference, not paying much attention to the prizes.
+
Actually, the star prize was a laptop donated by Entroware, a significant sponsor of the event, one of the most impressive prizes ever offered at OggCamp. There was quite a lot of excitement about this prize.
+
I attended the drawing of the raffle at the end of proceedings on the Sunday. Dan Lynch (of Linux Outlaws, and a frequent organiser of OggCamp) was in attendance overseeing the selection of the raffle tickets. Various smaller prizes were won and the tension built up as the final drawing approached.
+
Things got very tense when the first number drawn for the laptop was called and nobody responded. Then another draw was made.
+
Imagine my shock and surprise when I realised I had the winning ticket! I had won the star prize in the OggCamp raffle!
+
The laptop I won
+
Entroware Ltd are a Liverpool-based company who sell Ubuntu Linux computers.
+
The model is the Kratos, an attractive dark grey laptop with a brushed metal-looking case. It is listed on the Entroware website but the specifications have changed a little from the model I have.
+
I was also interviewed about this laptop by @kevie and @mcnalu on the TuxJam podcast. One of their questions was whether I'd recorded an HPR show about it, and if not why not. That prompted me to put together this episode, so thanks to the guys for the suggestion!
+
The specifications
+
+
The Kratos model I won has a Core i3-4100M 2.50GHz processor with 8GB DDR3 memory and a 120GB SSD. It has a 1080p 15.6 inch screen.
+
Sound is handled by a Sound Blaster sound system, and graphics by an NVIDIA GeForce GTX 950M graphics card. There are also on-board graphics which are switchable. I found that by default it used the on-board graphics for economy, but could be switched to the NVIDIA card through the NVIDIA application.
+
For networking it has Intel AC-8260 wireless, Bluetooth and Gigabit Ethernet.
+
Externally it has a camera, a VGA connector, an Ethernet port, 1 USB 3.0/e-SATA combo port, 1 USB 2.0 port and 2 USB 3.0 ports, an HDMI connector, a multi-function card slot, a DVD rewriter, 2 audio jacks and a lock point.
+
It came installed with Ubuntu 15.10 (Wily Werewolf).
+
+
Impressions
+
I have never bought myself a new laptop. I have a Netbook, an ASUS eeePC, which I use mainly for its portability, but I don't count it as a full laptop. I also have another i3 system, a laptop my daughter owned then passed on to me when she wanted to upgrade. It's an HP G62 Notebook model, about 5 or 6 years old, which became more or less unusable for Windows and was always overheating. After stripping it down and giving it a good clean, a new battery and installing Linux, it's been OK for occasional use, though it still gets pretty hot. It normally runs on top of a laptop cooler for that reason.
+
The Entroware Kratos on the other hand seems like a well built machine and the fact that it is tailored for running Linux is a great advantage. Of course, that SSD makes a huge difference compared to what I have been used to.
+
The fact that it has Ubuntu installed on it, with the Unity interface is a little bit less appealing to me. I started my Linux journey with RedHat Linux, which became Fedora, then I moved to Ubuntu with KDE and ran that for many years. I also ran Ubuntu Netbook Remix on my ASUS at one point, around the time that Unity was released. It was OK for a while but didn't run well as new releases were installed, and I replaced it with the late lamented CrunchBang Linux, which I ran for several years.
+
I will persevere with Ubuntu and Unity for the moment, but I think I will eventually install Debian on the laptop. I run Debian Testing on my desktop system, and really enjoy the fact that it keeps up to date and delivers very recent versions of what I need without having to perform a version upgrade twice a year.
+
I might also consider trying out Slackware or perhaps even Arch Linux on it.
+
The Kratos is a much lighter machine than the old HP, at 2.5 kg, and this is noticeable and very welcome.
+
I'm not a seasoned laptop user but it seems to me that the keyboard on the Kratos is rather unlike my other machines. I'm not much of a typist, but the keys do seem a little small given the available space.
+
The following pictures of the three keyboards are not to scale, but might give some idea of what I mean:
+
+The keyboard of the Entroware Kratos
+
+The keyboard of the ASUS eeePC
+
+The keyboard of the HP G62
+
Would I recommend Entroware?
+
I was asked this question on TuxJam, and my feeling is: Yes, I would.
+
I really like the fact that they are selling and supporting systems which run Linux. There are not and have not been many companies who have done this in the UK as far as I know.
+
A number of UK companies offering systems without an operating system or with Linux already installed are listed on the TuxJam website.
+
I have always tried to buy hardware with Linux pre-installed in recent years. Back in 2008 I bought a Linux desktop from a company called EfficientPC in London. It was a Core Duo system with 1GB of memory and a 500GB hard disk, with Kubuntu installed, and I ran it for about five years. I don't think they are still in business (or the name has been re-used), but they did a great job of building such systems.
+
To my mind Entroware are offering some highly desirable systems, and long may they continue to do so.
+
+
diff --git a/eps/hpr2011/hpr2011_demo3.sh b/eps/hpr2011/hpr2011_demo3.sh
new file mode 100755
index 0000000..6da821d
--- /dev/null
+++ b/eps/hpr2011/hpr2011_demo3.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+# Report the temporary output file; $$ is the PID of the current process
+echo "Output file: /tmp/$$"
+for i in {1..10}; do
+ # Pick a random word and strip any possessive "'s" ending
+ w=$(shuf -n1 /usr/share/dict/words)
+ w=${w%\'s}
+ echo "$i: $w"
+done | tee /tmp/$$ | sed -e 'N;1,5D' # keep a copy, then delete lines 1-5
diff --git a/eps/hpr2011/hpr2011_demo4.sed b/eps/hpr2011/hpr2011_demo4.sed
new file mode 100755
index 0000000..e0921b4
--- /dev/null
+++ b/eps/hpr2011/hpr2011_demo4.sed
@@ -0,0 +1,7 @@
+1,2H
+2{
+ g
+ s/\`/[/gM
+ s/\'/]/gM
+ p
+}
diff --git a/eps/hpr2011/hpr2011_full_shownotes.html b/eps/hpr2011/hpr2011_full_shownotes.html
new file mode 100755
index 0000000..5d11449
--- /dev/null
+++ b/eps/hpr2011/hpr2011_full_shownotes.html
@@ -0,0 +1,294 @@
+
+ Introduction to sed - part 4 (HPR Show 2011)
+
Introduction to sed - part 4 (HPR Show 2011)
+
Dave Morriss
+
Table of Contents
+
Introduction
+
In the last episode we looked at some of the more frequently used sed commands, having spent previous episodes looking at the s command, and we also covered the concept of line addressing.
+
In this episode we will look at how sed really works in all the gory details, examine some of the remaining sed commands and begin to use what we know to build useful sed programs.
+
How sed REALLY works
+
In Episode 1 we looked briefly at the pattern space where sed holds the incoming data while commands are executed on it. In this episode we will look at this data buffer and its counterpart, the hold space, in more detail.
+
When considering the pattern space in earlier episodes it was simpler to visualise it as a relatively small storage area, capable of holding one line from the input stream. In fact, it is a buffer which can hold an arbitrarily large amount of data, though it is normally used to hold just the latest input line.
+
As you know from previous discussions, the pattern space is processed in the following cycle:
+
+
A line is read from the input stream, the trailing newline is removed and the result is stored in the pattern space
+
The commands making up the sed script are executed (as appropriate regarding addressing, etc.)
+
When command execution has finished the pattern space is printed to the output stream, after adding the trailing newline (if it was removed). This auto printing does not happen if the -n command line option is in effect.
+
The cycle then begins again, with the pattern space being cleared before the next line is read. This part of the cycle can be altered by a few special commands, which we will look at later.
+
+
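As a rough illustration, the cycle behaves like the following Bash sketch (a toy model only, ignoring addresses and the special commands; here the "script" just upper-cases each line):
+
while IFS= read -r line; do            # 1. read a line, newline removed
+    pattern_space=$line
+    pattern_space=${pattern_space^^}   # 2. run the script's commands
+    printf '%s\n' "$pattern_space"     # 3. auto-print (skipped under -n)
+done < sed_demo1.txt                   # 4. clear pattern space and repeat
+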
The hold space, on the other hand, is a storage buffer like the pattern space which is not affected by the cycle described above. Data placed in the hold space remains there until sed exits or until it is deleted explicitly. Commands exist which can move data to and from the hold space, as we will see.
+
Commands
+
This episode is following the GNU sed manual, particularly the section about less frequently-used commands. Some of the commands in this category in the manual have been omitted in this series though, and some will be held over to the next episode.
+
The y command
+
The y command transforms (transliterates) characters. The format is:
+
y/source-chars/dest-chars/
+
It operates on the pattern space transliterating any characters which match any of the source-chars with the corresponding character in dest-chars.
+
The delimiter used to separate the two parts is normally a slash (/), but it can be changed as we saw with the s command without preceding the first instance with a backslash.
+
If the delimiter is used in either of the two lists of characters it must be preceded by a backslash (\) to escape it.
+
The two lists must be the same length (not counting backslash escapes).
+
The y command has no flags.
+
In the following example the first two lines of the example file are processed with a y command. The lower-case vowels in these two lines are converted to the next vowel in the sequence, so 'a' becomes 'e', 'e' becomes 'i' and so forth:
+
$ sed -ne '1,2{y/aeiou/eioua/;p}' sed_demo1.txt
+Heckir Pabloc Redou (HPR) os en Intirnit Redou shuw (pudcest) thet riliesis
+shuws iviry wiikdey Mundey thruagh Frodey. HPR hes e lung loniegi guong beck tu
+
The next example uses nl as in earlier episodes to number the lines so you can see what has been done to them. The script contains two groups, both of which perform transliterations. The first group is controlled by an address expression which operates on odd lines, and the second group operates on even lines. The y commands perform similar vowel transformations as the previous example, but they cater for upper-case vowels as well. The vowel sequences are "rotated" differently for the even versus the odd lines. Only the first five lines of output are shown here.
+
$ nl -w3 -ba sed_demo1.txt | sed -ne '1~2{y/aeiouAEIOU/eiouaEIOUA/;p};2~2{y|aeiouAEIOU|iouaeIOUAE|;p}'
+1 Heckir Pabloc Redou (HPR) os en Ontirnit Redou shuw (pudcest) thet riliesis
+2 shaws ovory wookdiy Mandiy thraegh Frudiy. HPR his i lang lunoigo gaung bick ta
+3 Redou FriiK Emiroce, Bonery Rivulatoun Redou & Onfunumocun, end ot os e dorict
+4 cantuneituan af Twitoch ridua. Ploiso luston ta StinkDiwg's "Untradectuan ta
+5 HPR" fur muri onfurmetoun.
+
The = command
+
This is a GNU extension. It causes sed to print out the line number, followed by a newline. The number represents a count of the lines read on the input stream.
+
The command can be preceded by any of the address types we saw in episode 2.
+
The following example uses the = command to print out the number of the last line of the input file:
+
$ sed -ne '$=' sed_demo1.txt
+13
+
The next example prints out the line number followed by the line. Note how the newline after the number means that it is not on the same line as the text:
+
$ sed -ne '${=;p}' sed_demo1.txt
+13
+detail on a topic.
+
The usual issues about contiguous or separate files apply here and using the -s command line option has the following effect:
+
$ sed -sne '${=;p}' sed_demo1.txt sed_demo2.txt
+13
+detail on a topic.
+26
+contribute one show a year.
+
Commands that operate on the pattern space
+
The following four commands perform actions on the pattern space. Their usefulness can be difficult to appreciate without examples, but we need to know about them and about the set of hold space commands that follows, before we can begin building such examples.
+
The D command
+
This command deletes text from the pattern space in a similar way to the d command. However, it only deletes up to the first newline. The cycle is then restarted using the resulting pattern space, without reading any input.
+
If there is no newline the pattern space is deleted, and a new cycle is begun with a new input line being read. Under these circumstances, the D command behaves as the d command does.
+
The command can be preceded by any of the address types we saw in episode 2.
+
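A minimal illustration with GNU sed, pairing D with the N command described next (printf is used here just to generate three input lines):
+
$ printf 'one\ntwo\nthree\n' | sed -e 'N;D'
+three
+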
The N command
+
This command adds the next line of input to the pattern space, preceded by a newline. If there is no more input then sed exits without processing any more commands.
+
The command can be preceded by any of the address types we saw in episode 2.
+
The P command
+
This command prints out the contents of the pattern space up to the first newline.
+
The command can be preceded by any of the address types we saw in episode 2.
+
The l command
+
Format: l n
+
This command can be a useful tool for debugging a sed script since it shows what is currently in the pattern space.
+
The pattern space is "dumped" in fixed-length lines, where the length is controlled by the numeric value of n. There is a command-line option -l N or --line-length=N which provides a value if n is not provided with the command. The default value is 70. A value of 0 prevents line wrapping.
+
The n option to the command is a GNU sed extension.
+
The l command shows non-printable characters as sequences such as '\n' and '\t'. Each wrapped line ends with a '\' and the end of each line is shown by a '$' character.
+
The command can be preceded by any of the address types we saw in episode 2.
+
Running the l command on lines 1 and 2 of sed_demo1.txt with a width of 80 we see:
+
$ sed -ne '1,2l80' sed_demo1.txt
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases$
+shows every weekday Monday through Friday. HPR has a long lineage going back to$
+
Using the N command to accumulate the two lines in the pattern space before dumping it (using the default width) we see:
+
$ sed -ne '1,2{N;l}' sed_demo1.txt
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that re\
+leases\nshows every weekday Monday through Friday. HPR has a long lin\
+eage going back to$
+
Example using the pattern space
+
This example demonstrates the use of the N and D commands.
+
for i in {1..10}; do
+ w=$(shuf -n1 /usr/share/dict/words)
+ w=${w%\'s}
+ echo "$i: $w"
+done | tee /tmp/$$ | sed -e 'N;1,5D'
+
The loop iterates 10 times, using variable i. For each iteration variable w is set to a random word from the system dictionary /usr/share/dict/words. Since many of these words end in "'s" we remove such endings. The result is printed, with the iteration number in front.
+
The stream of 10 words is sent to the tee[1] command which saves a copy (this command writes to a file and also copies its input to STDOUT). The file chosen is a temporary file /tmp/$$ where Bash replaces the $$ symbol with the process id number of the current process.[2]
+
The stream of numbered words is also sent to sed, and each line read is appended to the pattern space. For input lines 1 to 5 inclusive the D command deletes a line from the accumulated lines in the pattern space, with the result that the last 5 lines remain at the end and are auto-printed.
+
During testing, this type of pipeline can be written to a file and run as a Bash script, or it can be written out on one line, as I normally do:
+
for i in {1..10}; do w=$(shuf -n1 /usr/share/dict/words); w=${w%\'s}; echo "$i: $w"; done | tee /tmp/$$ | sed -e 'N;1,5D'
+
The temporary file is useful to check the before and after states.
+
The Bash script discussed here is available as demo3.sh on the HPR website.
+
Commands to transfer to and from the hold space
+
The next five commands move lines to and from the hold space.
+
The h command
+
This command replaces the contents of the hold space with the contents of the pattern space. After executing the command the original contents of the hold space will be lost, and the contents of the pattern space will be in the hold space and the pattern space.
+
The command can be preceded by any of the address types we saw in episode 2.
+
The H command
+
This command appends the contents of the pattern space to the hold space preceded by a newline. The contents of the pattern space will not be affected by this process.
+
The command can be preceded by any of the address types we saw in episode 2.
+
The g command
+
This command replaces the contents of the pattern space with the contents of the hold space. After executing the command the original contents of the pattern space will be lost, and the two buffers will have the same contents.
+
The command can be preceded by any of the address types we saw in episode 2.
+
The G command
+
This command appends the contents of the hold space to the pattern space preceded by a newline. The contents of the hold space will not be affected by this process.
+
The command can be preceded by any of the address types we saw in episode 2.
+
The x command
+
This command exchanges the contents of the hold and pattern spaces.
+
The command can be preceded by any of the address types we saw in episode 2.
+
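As a taste of what these commands make possible, the following well-known one-liner prints a file with its lines in reverse order, like the tac command. On every line except the first, G appends the hold space to the pattern space; h then saves the growing result back to the hold space; and only on the last line is the accumulated, reversed text printed:
+
$ sed -ne '1!G;h;$p' sed_demo1.txt
+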
Flags and modifiers we omitted earlier
+
When we looked at the s command in episodes 1 and 2 we encountered a subset of the flags, and when we were looking at line addresses in episode 3 we missed out one of the modifiers.
+
One of the missing flags to s was 'M' (with 'm' as a synonym, just as 'i' is for 'I'), and the missing address modifier was also 'M'. They all affect regular expression matching in the same way.
+
The 'M' modifier/flag stands for multi-line and is useful in the case where the pattern space contains more than one line. It is a GNU sed extension.
+
The modifier causes '^' to match the empty string after a newline and '$' to match the empty string before a newline. There are also special metacharacters which match the beginning and end of the buffer: '\`' for the beginning and "\'" for the end.
+
The following brief examples demonstrate the features of the 'M' modifier.
+
Here we have accumulated two lines in the hold space, which have then been transferred to the pattern space. We use s commands (with a 'g' modifier, which is superfluous in this example, but useful later[3]) to add square brackets at the beginning and end:
+
$ sed -ne '1,2H;2{g;s/^/[/g;s/$/]/g;p}' sed_demo1.txt
+[
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+shows every weekday Monday through Friday. HPR has a long lineage going back to]
+
Remember that there's an extra newline at the start of the pattern space due to the way the H command works. This example shows that there is only one "beginning" and "end" in this buffer.
+
If we then modify both of the s commands with the 'M' flag/modifier we get:
+
$ sed -ne '1,2H;2{g;s/^/[/gM;s/$/]/gM;p}' sed_demo1.txt
+[]
+[Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases]
+[shows every weekday Monday through Friday. HPR has a long lineage going back to]
+
Now '^' and '$' relate to newlines and surround each of the lines.
+
Now, to indicate the start and end of the buffer we need to use '\`' and "\'". However, we have a problem since these characters are significant to the Bash shell, so we move to placing these commands in a file called demo4.sed:
+
$ cat demo4.sed
+1,2H
+2{
+ g
+ s/\`/[/gM
+ s/\'/]/gM
+ p
+}
+$ sed -nf demo4.sed sed_demo1.txt
+[
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+shows every weekday Monday through Friday. HPR has a long lineage going back to]
+
The demo4.sed file is available on the HPR website.
+
Examples
+
Example 1
+
This example mainly demonstrates the use of the P and y commands:
+
$ sed -ne '1,2{s/$/\n-/;P;y/aeiou/eioua/;p}' sed_demo1.txt
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+Heckir Pabloc Redou (HPR) os en Intirnit Redou shuw (pudcest) thet riliesis
+-
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+shuws iviry wiikdey Mundey thruagh Frodey. HPR hes e lung loniegi guong beck tu
+-
+
Auto-printing is turned off, and the sed commands are all grouped together and controlled by an address range that covers the first two lines of the file.
+
First an s command adds a newline followed by a hyphen to the current line in the pattern space. The following P command prints out the line that has just been edited in the pattern space, up to the newline that we just added (so we don't see the hyphen).
+
Then a y command operates on the line, which is still in the pattern space. It changes all the vowels by shifting them to the next in the alphabetic order - 'a' becomes 'e' and so forth. A final p command prints the edited line, which now generates two lines because of the newline and hyphen we added at the start.
+
Example 2
+
Here we use H and G to make use of the hold space and pattern space:
+
$ sed -e '1,/^$/{H;d};${G;s/\n$//}' sed_demo1.txt
+What differentiates HPR from other podcasts is that the shows are
+produced by the community - fellow listeners like you. There is no
+restrictions on how long the show can be, nor on the topic you can
+cover as long as they "are of interest to Hackers". If you want to see
+what topics have been covered so far just have a look at our Archive.
+We also allow for a series of shows so that host(s) can go into more
+detail on a topic.
+
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct
+continuation of Twatech radio. Please listen to StankDawg's "Introduction to
+HPR" for more information.
+
On every line from number 1 to the first blank line the first group of commands is run. The H command appends the input line to the hold space with a newline on the front of it. The d command deletes the line from the pattern space, preventing auto-printing of it. All other lines outside this address range are printed automatically.
+
When the last line is encountered another command group is run. The G command appends the hold space to the pattern space, but an extra blank line will have been generated at the front by the first H command. The s command removes the last newline from the pattern space, balancing this addition. The pattern space will then be auto-printed.
+
The effect will be to take the first paragraph from the text and move it to the end.
+
Example 3
+
In Episode 2 I speculated on a solution to the problem of joining all of the lines in a text file to make one long line. The following example offers a solution to this:
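+
(The following command is a sketch, reconstructed to match the description below.)
+
$ x=$(sed -ne 'H;${g;s/^\n//;s/\n/ /g;p}' sed_demo1.txt)
+$ echo ${#x}
+768
+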
The example runs the sed command in a command substitution expression such that the variable x contains the result (this subject was covered in my episode entitled "Some further Bash tips"). The length of the variable is then reported.
+
The sed command turns off auto-printing. Within the sed script itself the H command is run on every line, and this causes every input line to be appended to the hold space with a newline on the front.
+
When the last line of the file is encountered a group of commands is run. The g command replaces the pattern space with the hold space. The pattern space now contains the newline that was appended before the first input line was saved. The first s command removes this from the front of the pattern space. The second s command replaces all the newlines in the pattern space with a space, thereby making one continuous string. This is then printed with the p command.
+
As a point of interest, the resulting text is the same length as the original, as can be proved by the following:
+
$ y=$(cat sed_demo1.txt)
+$ echo ${#y}
+768
+
Quiz
+
Pig Latin / Igpay Atinlay
+
Use the test data in sed_demo1.txt from Episode 1 and, using a single invocation of sed, convert the first line to Pig Latin. The rules of how to generate this are simple in essence, though there are some exceptions (see the Wikipedia entry for the full details). We will just go for the simplest solution in this quiz, though if you want to be more advanced in your submission please go ahead.
+
In brief the rules are:
+
+
Take the first letter of each word and place it at the end, followed by 'ay'. Thus 'pig' becomes 'igpay' and 'latin' becomes 'atinlay'.
+
Skip 1-, 2- and 3-letter words, since 'a' -> 'aay' is not wanted.
+
Do not bother about capitals. Ideally 'Latin' should become 'Atinlay', but sed may not be the best tool to use to do that!
+
+
I will include my solution to this problem in the next episode. I hope you will be able to come up with a much better answer than mine!
+
+
Note: If you submit a working solution you may be eligible for a prize of some HPR stickers. Send your submission to me. My email address is available here after removing the anti-spam measures. The competition will close after I have posted episode 5 in this series to the HPR site.
+
1. My explanation of tee in the audio was less than clear. I should have said that everything sent through the command is written both to the file and to STDOUT. I think the text explains it though.↩
+
2. If you run the script demo3.sh and try to look at the temporary file you will not see it. This is because the PID generated by $$ is local to the process running the script. I have modified the script to report the name of the file to allow you to examine it once it has run.↩
+
3. I used the 'g' flags here just because I used them in the next example; they don't actually do anything. With hindsight, it might have been better if I had removed them in this one.↩
+
diff --git a/eps/hpr2023/hpr2023_RPi_server_collection.png b/eps/hpr2023/hpr2023_RPi_server_collection.png
new file mode 100755
index 0000000..cc195b4
Binary files /dev/null and b/eps/hpr2023/hpr2023_RPi_server_collection.png differ
diff --git a/eps/hpr2023/hpr2023_adafruit-pi-externalroot-helper.txt b/eps/hpr2023/hpr2023_adafruit-pi-externalroot-helper.txt
new file mode 100755
index 0000000..59896da
--- /dev/null
+++ b/eps/hpr2023/hpr2023_adafruit-pi-externalroot-helper.txt
@@ -0,0 +1,149 @@
+#!/usr/bin/env bash
+
+# adafruit-pi-externalroot-helper
+#
+# Configure a Raspbian system to use an external USB drive as root filesystem.
+#
+# See README.md for details and sources.
+
+set -e
+
+function print_version() {
+ echo "Adafruit Pi External Root Helper v0.1.0"
+ exit 1
+}
+
+function print_help() {
+ echo "Usage: $0 -d [target device]"
+ echo " -h Print this help"
+ echo " -v Print version information"
+ echo " -d [device] Specify path of device to convert to root"
+ echo
+ echo "You must specify a device. See:"
+ echo "https://learn.adafruit.com/external-drive-as-raspberry-pi-root"
+ exit 1
+}
+
+
+# Display an error message and quit:
+function bail() {
+ FG="1;31m"
+ BG="40m"
+ echo -en "[\033[${FG}\033[${BG}error\033[0m] "
+ echo "$*"
+ exit 1
+}
+
+# Display an info message:
+function info() {
+ task="$1"
+ shift
+ FG="1;32m"
+ BG="40m"
+ echo -e "[\033[${FG}\033[${BG}${task}\033[0m] $*"
+}
+
+if [[ $EUID -ne 0 ]]; then
+ bail "must be run as root. try: sudo adafruit-pi-externalroot-helper"
+fi
+
+# Handle arguments:
+args=$(getopt -uo 'hvd:' -- $*)
+[ $? != 0 ] && print_help
+set -- $args
+
+for i
+do
+ case "$i"
+ in
+ -h)
+ print_help
+ ;;
+ -v)
+ print_version
+ ;;
+ -d)
+ target_drive="$2"
+ echo "Target drive = ${2}"
+ shift
+ shift
+ ;;
+ esac
+done
+
+if [[ ! -e "$target_drive" ]]; then
+ bail "Target ${target_drive} must be existing device (use -d /dev/foo to specify)"
+fi
+
+info "start" "Will create new ext4 filesystem on ${target_drive}"
+info "start" "If there is data on ${target_drive}, it will be lost."
+read -p "Really proceed? (y)es / (n)o " -n 1 -r
+echo
+if [[ ! $REPLY =~ ^[Yy]$ ]]
+then
+ echo "Quitting."
+ exit
+fi
+
+export target_partition="${target_drive}1"
+
+info "dependencies" "Installing gdisk, rsync, and parted."
+ # All except gdisk are probably installed, but let's make sure.
+ apt-get install gdisk rsync parted
+
+info "fs create" "Creating ${target_partition}"
+ # The alternative here seems to be to pipe a series of commands
+ # to fdisk(1), similar to how it's done by raspi-config:
+ # https://github.com/asb/raspi-config/blob/3a5d75340a1f9fe5d7eebfb28fee0e24033f4fd3/raspi-config#L68
+ # This seemed to work, but I was running into weirdness because
+ # that doesn't seem to make a GPT, and later on I couldn't get
+ # partition unique GUID from gdisk. parted(1) is also a nice
+ # option because it's scriptable and allows partition sizes to
+ # be specified in percentages.
+ parted --script "${target_drive}" mklabel gpt
+ parted --script --align optimal "${target_drive}" mkpart primary ext4 0% 100%
+
+info "fs create" "Creating ext4 filesystem on ${target_partition}"
+ mkfs -t ext4 -L rootfs "${target_partition}"
+
+info "fs id" "Getting UUID for target partition"
+ eval `blkid -o export "${target_partition}"`
+ export target_partition_uuid=$UUID
+
+info "fs id" "Getting Partition unique GUID for target filesystem"
+ # Ok, so the only way I was able to get this was using gdisk.
+ # I don't quite understand the different between this value and
+ # the one you can get with blkid and tune2fs (which seem to give
+ # the same thing). Nevertheless, this seems to be necessary to
+ # get a value that can be used in cmdline.txt. I think it's a
+ # GUID specifically for the GPT partition table entry.
+ export partition_unique_guid=`echo 'i' | sudo gdisk "${target_drive}" | grep 'Partition unique GUID:' | awk '{print $4}'`
+
+info "fs id" "Target partition UUID: ${target_partition_uuid}"
+info "fs id" "Partition unique GUID: ${partition_unique_guid}"
+
+info "fs copy" "Mounting ${target_partition} on /mnt"
+ mount "${target_partition}" /mnt
+
+info "fs copy" "Copying root filesystem to ${target_partition} with rsync"
+info "fs copy" "This will take quite a while. Please be patient!"
+ rsync -ax / /mnt
+
+info "boot config" "Configuring boot from {$target_partition}"
+ # rootdelay=5 is likely not necessary here, but seems to do no harm.
+ cp /boot/cmdline.txt /boot/cmdline.txt.bak
+ sed -i "s|root=\/dev\/mmcblk0p2|root=PARTUUID=${partition_unique_guid} rootdelay=5|" /boot/cmdline.txt
+
+info "boot config" "Commenting out old root partition in /etc/fstab, adding new one"
+ # These changes are made on the new drive after copying so that they
+ # don't have to be undone in order to switch back to booting from the
+ # SD card.
+ sed -i '/mmcblk0p2/s/^/#/' /mnt/etc/fstab
+ echo "/dev/disk/by-uuid/${target_partition_uuid} / ext4 defaults,noatime 0 1" >> /mnt/etc/fstab
+
+info "boot config" "Ok, your system should be ready. You may wish to check:"
+info "boot config" " /mnt/etc/fstab"
+info "boot config" " /boot/cmdline.txt"
+info "boot config" "Your new root drive is currently accessible under /mnt."
+info "boot config" "In order to restart with this drive at /, please type:"
+info "boot config" "sudo reboot"
diff --git a/eps/hpr2023/hpr2023_full_shownotes.html b/eps/hpr2023/hpr2023_full_shownotes.html
new file mode 100755
index 0000000..59e50d9
--- /dev/null
+++ b/eps/hpr2023/hpr2023_full_shownotes.html
@@ -0,0 +1,179 @@
+
+ Setting up my Raspberry Pi 3 (HPR Show 2023)
+
Setting up my Raspberry Pi 3 (HPR Show 2023)
+
Dave Morriss
+
Table of Contents
+
Introduction
+
I bought a Raspberry Pi 3 in March 2016, soon after it was released. I want to use it as a server since it's the fastest Pi that I own, so I have tried to set it up in the best way for that role.
+
In this episode I describe what I did in case you want to do something similar.
+
Hardware
+
The equipment I bought:
+
+
The Raspberry Pi model 3
+
A Pimoroni Pibow 3 Ninja case
+
+
This is an acrylic case made from laser-cut layers, suitable for the RPi3 and with room to contain a HAT if required. Apparently it has better ventilation than RPi2 cases
+
+
A 2.5 amp Universal Power Supply
+
+
The ones I already had are 2 amp or less
+
+
A 6mm heat sink (with enough clearance in case I want to fit a HAT on the Pi)
+
+
The Pi3 generates more heat, I'm told
+
+
A USB to SATA converter
+
+
Has 2 USB connectors with one for boosting the power; not needed on the Pi3
+
+
A 120GB internal SSD
+
A 32GB microSD Class 10 card
+
+
Probably overkill!
+
+
+
The 2.5 amp PSU is necessary because I'm powering the SSD from the Pi, and lower-rated supplies will be inadequate since the Pi3 itself draws more current.
+
There are links at the end, but these merely document my choices and shouldn't be taken as recommendations or an attempt to sell anything.
+
Setup
+
I installed the latest Raspbian (Jessie) on the microSD card. I plugged in my old VGA monitor which I use occasionally for this type of thing. I have an HDMI/VGA adaptor cable for doing this. I connected a USB keyboard and mouse and made sure the Pi was good to go.
+
The main things I do when configuring a Pi are:
+
+
Configure my router to give it a fixed IP address, and add it to my various /etc/hosts files on other systems. This is for the LAN connection. I run all my Pi's directly from my router or from one of four Gigabit switches around the house.
+
Install vim on the Pi
+
Ensure an ssh server is running - it wasn't always there in older versions.
+
Create an account, and add it to the sudoers file (with visudo). I use the same account on all my Pi's so I can administer them with cssh
+
Share my ssh key so I can login without a password
+
Change the password on the pi account
+
Use raspi-config to make the Pi boot to text mode
+
Set the Pi up in its final location in headless mode
+
+
In this case I added some further steps to the setup to use the SSD as the root disk.
+
Running off the SSD
+
I had set this up within the last year for another one of my Raspberry Pi's, a Pi2. In this one I'd added an external 1TB disk powered through a powered USB hub. In doing this I used the Adafruit tutorial which describes this process. This tutorial is based on the article "HOWTO: Move the filesystem to a USB stick/Drive", by paulv on the Raspberry Pi forums.
+
As an aside, I'd bought a FLIRC aluminium case for this Pi2 because it incorporated a heatsink and was quite attractive and seemed very robust. I believe that this case can't be used with the Pi3 because it will block the WiFi and Bluetooth, but I can't find any confirmation that this is so. I haven't tried swapping Pi's between cases.
+
To run the Adafruit solution I did the following:
+
+
Connected everything up using the Raspbian SD card
+
Determined what device my SSD was connected to - /dev/sda
+
The script adafruit-pi-externalroot-helper is a Bash script and is clearly written. It needs to be run as root since it performs low-level actions.
+
In case you are interested in what the script actually does, I have described it here. It is made available under an MIT licence, so I have included a copy on the HPR website so that you can view it as you listen or as you read these notes.
+
If you are not interested then you can skip to the next section of course.
+
+
The script starts by ensuring that the necessary packages have been installed, using apt-get to install them: gdisk, rsync and parted.
+
It then uses parted (which I didn't know about) to make a "gpt" partition table, then to make a primary partition of type "ext4" occupying the entire disk.
+
Then it uses mkfs to make an ext4 file system labelled "rootfs" on this partition.
+
Next it uses blkid to find the UUID for the partition.
+
+
This tool, when called in this way, returns a list of values in the format key=value.
+
The script uses an eval call with the blkid command in a command substitution so that these keys and values create variables in the environment.
+
Since there is a line "UUID=value" in the output (amongst others) a variable of that name will be created with the value for the partition.
+
The result is stored in a variable target_partition_uuid
+
+
The script then collects the "Partition unique GUID" from the device, since this is needed to make the boot sequence use the external disk rather than the SD card.
+
+
It does this by echoing the 'i' command (partition information) to gdisk, running against the device, not the partition.
+
In the output there is a line beginning with "Partition unique GUID:" which is searched for with grep and the 4th element (GUID) is extracted with awk.
+
The result of all of this is that a variable partition_unique_guid is set to the GUID.
+
+
The script reports the UUID and GUID values it has collected.
+
Next it mounts the partition on a mount point called /mnt.
+
Having done that it uses rsync to copy everything under the root directory (/) on the SD card to /mnt, the external disk. This is not a fast process, since SD cards are not particularly quick to read.
+
Now the script edits the file /boot/cmdline.txt having first made a backup copy in /boot/cmdline.txt.bak.
+
+
Since it's using sed to do the edit it could have achieved that all in the same command (as you'll know if you've been following my sed series!!). However, doing it this way might be slightly safer if the sed command fails for whatever reason.
+
The sed command changes the definition of the root device in /boot/cmdline.txt to the disk rather than the SD card, and sets a rootdelay=5 value to delay 5 seconds for the disk to become ready.
+
The '-i' sed command line option is used to perform the edit in-place. The script could have generated a backup at this point of course.
+
Note that the sed command is enclosed in double quotes. This allows Bash to substitute in the value of the partition_unique_guid variable.
+
Note also that the author escapes the slashes in the 's' command where this is not necessary because '|' has been used as a delimiter.
+
+
Next the script edits the copied /etc/fstab (currently under /mnt) to mount the external disk as / rather than the SD card.
+
+
It runs sed to find the line containing the SD card partition and comments it out.
+
It then appends a line to the file with a new mount point using the UUID collected earlier.
+
+
Now all is complete and the user is prompted to check what has been done before rebooting the Pi.
+
+
I saved the transcript of what went on when I ran this script and have made it available on the HPR site in case it is useful to anyone.
+
Reverting to the SD card
+
Since all that has changed on the SD card is the file /boot/cmdline.txt, and we have a backup, reversion can be achieved by replacing the changed file with the backup. Note that any updates and additions made on the external disk will not be available any more and will need to be re-applied or copied to the SD card if you do this.
+
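For example, the following sketch (assuming the SD card's boot partition is still mounted at /boot, as in the standard setup) restores the backup and reboots:
+
$ sudo cp /boot/cmdline.txt.bak /boot/cmdline.txt
+$ sudo reboot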
+
diff --git a/eps/hpr2023/hpr2023_session_log.txt b/eps/hpr2023/hpr2023_session_log.txt
new file mode 100755
index 0000000..ea4ad46
--- /dev/null
+++ b/eps/hpr2023/hpr2023_session_log.txt
@@ -0,0 +1,65 @@
+dave@rpi5:~ $ git clone https://github.com/adafruit/Adafruit-Pi-ExternalRoot-Helper.git
+Cloning into 'Adafruit-Pi-ExternalRoot-Helper'...
+remote: Counting objects: 53, done.
+remote: Total 53 (delta 0), reused 0 (delta 0), pack-reused 53
+Unpacking objects: 100% (53/53), done.
+Checking connectivity... done.
+
+dave@rpi5:~/Adafruit-Pi-ExternalRoot-Helper $ sudo su -
+[sudo] password for dave:
+root@rpi5:~# cd ~dave/Adafruit-Pi-ExternalRoot-Helper/
+root@rpi5:/home/dave/Adafruit-Pi-ExternalRoot-Helper# ./adafruit-pi-externalroot-helper -d /dev/sda
+Target drive = /dev/sda
+[start] Will create new ext4 filesystem on /dev/sda
+[start] If there is data on /dev/sda, it will be lost.
+Really proceed? (y)es / (n)o y
+[dependencies] Installing gdisk, rsync, and parted.
+Reading package lists... Done
+Building dependency tree
+Reading state information... Done
+parted is already the newest version.
+rsync is already the newest version.
+rsync set to manually installed.
+The following NEW packages will be installed:
+ gdisk
+0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
+Need to get 193 kB of archives.
+After this operation, 799 kB of additional disk space will be used.
+Do you want to continue? [Y/n]
+Get:1 http://mirrordirector.raspbian.org/raspbian/ jessie/main gdisk armhf 0.8.10-2 [193 kB]
+Fetched 193 kB in 0s (321 kB/s)
+Selecting previously unselected package gdisk.
+(Reading database ... 127195 files and directories currently installed.)
+Preparing to unpack .../gdisk_0.8.10-2_armhf.deb ...
+Unpacking gdisk (0.8.10-2) ...
+Processing triggers for man-db (2.7.0.2-5) ...
+Setting up gdisk (0.8.10-2) ...
+[fs create] Creating /dev/sda1
+[fs create] Creating ext4 filesystem on /dev/sda1
+mke2fs 1.42.12 (29-Aug-2014)
+Creating filesystem with 29304832 4k blocks and 7331840 inodes
+Filesystem UUID: 08fcdeb6-7df7-4aeb-a21b-c48438cfb828
+Superblock backups stored on blocks:
+ 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
+ 4096000, 7962624, 11239424, 20480000, 23887872
+
+Allocating group tables: done
+Writing inode tables: done
+Creating journal (32768 blocks): done
+Writing superblocks and filesystem accounting information: done
+
+[fs id] Getting UUID for target partition
+[fs id] Getting Partition unique GUID for target filesystem
+[fs id] Target partition UUID: 08fcdeb6-7df7-4aeb-a21b-c48438cfb828
+[fs id] Partition unique GUID: 6F8E0B29-6C85-490F-A316-1092BC2A7D86
+[fs copy] Mounting /dev/sda1 on /mnt
+[fs copy] Copying root filesystem to /dev/sda1 with rsync
+[fs copy] This will take quite a while. Please be patient!
+[boot config] Configuring boot from {/dev/sda1}
+[boot config] Commenting out old root partition in /etc/fstab, adding new one
+[boot config] Ok, your system should be ready. You may wish to check:
+[boot config] /mnt/etc/fstab
+[boot config] /boot/cmdline.txt
+[boot config] Your new root drive is currently accessible under /mnt.
+[boot config] In order to restart with this drive at /, please type:
+[boot config] sudo reboot
diff --git a/eps/hpr2045/hpr2045_full_shownotes.html b/eps/hpr2045/hpr2045_full_shownotes.html
new file mode 100755
index 0000000..9986266
--- /dev/null
+++ b/eps/hpr2045/hpr2045_full_shownotes.html
@@ -0,0 +1,370 @@
+
+ Some other Bash tips (HPR Show 2045)
+
Some other Bash tips (HPR Show 2045)
+
Dave Morriss
+
Table of Contents
+
Expansion
+
As we saw in the last episode (1951) and others in this sub-series, there are eight types of expansion applied to the command line, in the following order:
+
+
Brace expansion (we looked at this subject in episode 1884)
+
Tilde expansion
+
Parameter and variable expansion
+
Arithmetic expansion
+
Command substitution
+
Process substitution
+
Word splitting
+
Pathname expansion
+
+
We will look at process substitution and word splitting in this episode, but since there is a lot to cover in these subjects, we'll save pathname expansion for the next episode.
+
Process substitution
+
The process substitution feature in Bash is a way in which data can be passed to or from a process. A process is most simply thought of as one or more commands running together.
+
Not all Unix systems can implement process substitution since it either uses what are known as "named pipes" (or FIFOs) or special files called /dev/fd/<n> (where <n> represents a number). These are temporary data storage structures which separate processes are able to access for reading or writing.
+
The name of this pipe or /dev/fd/<n> "interconnecting file" is passed as an argument to the initial command. This can be a difficult concept to understand, and we'll look at it in more detail soon.
+
There are two forms of process substitution:
+
>(command list)
+<(command list)
+
The first form receives input which has been sent via the interconnecting file and passes it to the command list. The second form generates output from the command list and passes it on via the interconnecting file which needs to be read to receive the result. In both cases there must be no spaces between the '<' or the '>' and the open parenthesis.
+
Some experiments with simplified commands might help to clarify this. First consider a simple pipeline:
+
$ echo Test | sed -e 's/^.*$/[&]/'
+[Test]
+
This is a pipeline where the echo command generates data on its STDOUT which is passed to the sed command on its STDIN via the pipe. The sed command modifies what it receives by placing square brackets around it and passes the result to its STDOUT and the text is displayed by the shell (Bash).
+
Contrast this with process substitution. If we want to generate the same text and pass it to sed within a process we might write:
+
$ echo Test >(sed -e 's/^.*$/[&]/')
+Test /dev/fd/63
+
This does not work as expected. What is happening here is that an interconnecting file name (/dev/fd/63) has been created and passed to the echo command, with the expectation that it will be used to send data to the process. However, echo simply sees the name as another argument and displays it. The process substitution containing the sed command has received nothing.
+
This example needs to be rewritten by adding a redirection symbol (>) after the echo:
+
$ echo Test > >(sed -e 's/^.*$/[&]/')
+[Test]
+
Behind the scenes, Bash will have changed this (invisibly) into something like:
+
$ echo Test > /dev/fd/63
+
This time the interconnection between the command on the left and the process substitution expression holding the sed command has been made.
+
Note that using a pipe instead also does not work:
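+
A reconstruction of the sort of command meant here is shown below; Bash expands the process substitution to a /dev/fd name and then tries, and fails, to execute that name as a command:
+
$ echo Test | >(sed -e 's/^.*$/[&]/')
+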
This is because the filename is being used on the right of the pipe symbol where a command, script or program name is expected.
+
The corresponding version of this example using the other form of process substitution is:
+
$ sed -e 's/^.*$/[&]/' <(echo Test)
+[Test]
+
Here the interconnecting file name is being provided to the sed command. To visualise this we can modify the sed script by using the F command, a GNU extension which reports the name of the input file (followed by a newline):
+
$ sed -e 'F;s/^.*$/[&]/' <(echo Test)
+/dev/fd/63
+[Test]
+
To wrap up, the Bash manual page states the following in the context of process substitution as part of the larger topic of Expansion:
+
When available, process substitution is performed simultaneously with parameter and variable expansion, command substitution, and arithmetic expansion.
+
We have now seen all of these expansion types in this series.
+
Process Substitution Examples
+
An example of the first form is:
+
$ echo "Hacker Public Radio" > >(sed -ne 's/\([A-Z]\)\([a-z]\+\)/\l\1\U\2/gp;')
+hACKER pUBLIC rADIO
+
This just uses a sed script to reverse the case of the words fed to it. It is equivalent to the very simple tests we did earlier to demonstrate the concepts of process substitution.
+
An example of the second form:
+
$ sed -ne '1~5p' <(nl -w2 -ba -s': ' sed_demo1.txt)
+ 1: Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+ 6:
+11: what topics have been covered so far just have a look at our Archive.
+
This is a modification of an example used in the "Introduction to sed" series. Here sed has been requested to print line 1 of its input stream, followed by every fifth line thereafter. We used the nl (number lines) command to number the incoming lines to make it clear what was happening.
+
Here the nl command is being used in a process substitution to feed data to sed on its STDIN channel (via the interconnecting file of course). Note that this is just a demonstration of the topic, it would not make sense to use these two commands together in this way.
The next example uses the join command to join two lists of random words (a sketch of the sort of pipeline involved is shown after this description). This command expects two files (or two data sources) containing lines with identical join fields. The default field is the first.
+
The two processes provide the words, both doing the same thing. The shuf command selects 5 words from the system dictionary. We use sed to strip any apostrophe, and the characters that follow it, from these words. We then number the words using nl. Each process will generate the same numbers, and these numbers form the join field.
+
Thus the first pipeline will generate five words and add the numbers 1-5 to them, and so will the second, and so join will join together word 1 from the first stream with word 1 from the second, and so forth.
+
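Here is a minimal sketch of the sort of pipeline being described; the dictionary path /usr/share/dict/words and the exact sed expression are my assumptions rather than the original code:
+
$ join <(shuf -n5 /usr/share/dict/words | sed "s/'.*//" | nl) \
+       <(shuf -n5 /usr/share/dict/words | sed "s/'.*//" | nl)
+
Each run pairs word n from the first stream with word n from the second, producing five numbered two-word lines.
+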
It's a fairly useless example, but I find it amusing.
+
A final example. This one uses a Bash while loop to receive lines from a database query in a multi-command process substitution expression (think of this as an excerpt from a longer script):
+
count=0
+while read id name
+do
+ printf "%02d \"%s\"\n" $id "$name"
+ ((count++))
+done < <(echo "select id, name from series order by lower(name);" | sqlite3 hpr_talks.db)
+echo "Found $count series"
+
+51 "10 buck review"
+80 "5150 Shades of Beer"
+38 "A Little Bit of Python"
+79 "Accessibility"
+22 "All Songs Considered"
+83 "April Fools Shows"
+42 "Bash Scripting"
+.
+.
+Found 82 series
+
The query is requesting a list of series numbers and names from a SQLite database containing information about HPR shows. The query is made by using echo to pipe an SQL query expression to the sqlite3 command.
+
The output from the query is being collected by a read statement into two variables id and name. These are then printed with a printf command, though such a loop would normally be used to perform some action using these variables, not just print them.
+
Normally such a while loop would read from a file by placing a redirection symbol (<) and a file name after the done part. Here, instead of a file we are reading the output of a process which is querying a database.
+
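For comparison, a minimal sketch of the file-reading form (somefile.txt is a hypothetical file):
+
count=0
+while read -r line
+do
+    ((count++))
+done < somefile.txt
+echo "Read $count lines"
+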
A counter variable count is set up before the loop, and incremented1 within it. The final total is reported after the loop.
+
A sample of the output is shown after the code snippet.
+
As an aside, it is possible to build a loop using a pipeline, avoiding process substitution altogether:
+
count=0
+echo "select id, name from series order by lower(name);" |\
+sqlite3 hpr_talks.db |\
+while read id name
+do
+ printf "%02d \"%s\"\n" $id "$name"
+ ((count++))
+done
+echo "Found $count series"
+
The problem is that the final echo returns a count of zero.
+
This can be puzzling to new users of Bash - it certainly puzzled me when I first encountered it. It is because the while loop in this case runs in a separate process. Bash does not share variables between processes, so the count variable inside the loop is different from the one initialised before the loop. Thus the count outside the loop remains at zero.
+
If there is interest in looking further at this issue and other "Bash Gotchas" like it it could be included in a future episode of this (sub-)series.
+
Word splitting
+
The subject of word splitting is important to the understanding of how Bash works. Not fully appreciating this has been the downfall of many Bash script-writers, myself included.
+
Some examples of word splitting
+
By default, and looked at simplistically, Bash separates words using spaces. The following simple function can be used to report how many arguments it has received in order to demonstrate this:
+
function countargs () {
+ echo $#
+}
+
The function is called countargs for "count arguments". When it is called with arguments these are available to the function in the same way that arguments are available to a script. So, the special variable # contains the argument count.
+
Calling countargs with no arguments gives the answer 0:
+
$ countargs
+0
+
However, calling it with a quoted string argument returns 1, showing that such a string counts as a single word in the context of a Bash script:
+
$ countargs "Mary had a little lamb"
+1
+$ countargs 'Mary had a little lamb'
+1
+
This also works if the string is empty:
+
$ countargs ""
+1
+
When variables are used things become a little more complex:
+
$ str="fish fingers and custard"
+$ countargs $str
+4
+
Here the variable str has been expanded, which has resulted in four words being passed to the function. This has happened because Bash has applied word splitting to the result of expanding str.
+
If you want to pass a string like this to a function such that it can be used in the same way as in the calling environment, then it needs to be enclosed in double quotes:
+
$ countargs "$str"
+1
+
While we are examining arguments on the command line it might be useful to create another function. This one, printargs, simply prints its arguments, one per line with the argument number in front of each.
+
function printargs() {
+ i=1
+ for arg; do
+ echo "$i $arg"
+ ((i++))
+ done
+}
+
This function uses a Bash for loop. Normally this is written as:
+
for var in list; do
+ # Do things
+done
+
However, if the in list part is omitted the loop cycles through the arguments passed to a script or function. That is what is being done here, using the variable arg to hold the values. The variable i is used to count through the argument numbers.
+
We will look at Bash loops in more detail in a later episode.
+
Using this function instead of countargs we get:
+
$ printargs $str
+1 fish
+2 fingers
+3 and
+4 custard
+
+$ printargs "$str"
+1 fish fingers and custard
+
We can see that word splitting has taken place in the first example but in the second enclosing the variable expansion in double quotes has suppressed this splitting.
+
The Internal Field Separator (IFS)
+
As we have seen, word splitting normally takes place using spaces as the word delimiter.
+
In fact, this delimiter is controlled by a special Bash variable "IFS" (which stands for "Internal Field Separator").
+
By default the IFS variable contains three characters: a space, a tab and a newline. If the IFS variable is unset it is treated as if it held these characters. However, if its value is null then no splitting occurs.
+
It is important to understand the difference between unset and null if you want to manipulate this variable. If a Bash variable is unset it is not defined at all. This can be achieved with the command:
+
unset IFS
+
If the variable is null then it is defined but has no value, which can be achieved with the command:
+
IFS=
+
If you are changing the IFS variable and need to change it back to its default value then this can be achieved in several ways. First, it can be defined explicitly by typing a space, tab and a newline in a string. This is not as simple as it seems though, so an alternative is this:
+
printf -v IFS " \t\n"
+
This relies on the ability of printf to generate special characters using escape sequences, and the -v option which writes the result to a variable.
+
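Another way, which I believe works in any modern Bash, is ANSI-C quoting, which lets the escape sequences be written directly in the assignment:
+
IFS=$' \t\n'
+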
The technique normally used in a Bash script is to save the value of IFS before changing it, then restore it later. For example:
+
oldIFS="$IFS"
+IFS=":"
+
+# Commands using changed IFS
+
+IFS="$oldIFS"
+
One method of checking what the IFS variable contains is to use the cat command with the option -A, which is a shorthand way of using the options -v, -T and -E. These have the following effects:
+
Option -v displays non-printing characters using ^ and M- notation
+
Option -T displays TAB characters as ^I
+
Option -E displays a $ character at the end of each line
+
+
Simply echoing IFS into cat will do it:
+
$ echo "$IFS" | cat -A
+ ^I$
+$
+
The output shows the space, followed by the ^I representation of the tab, followed by a dollar sign marking the end of the line; the line ends there because of the newline character. The second line contains just a dollar sign, because it is the empty line generated by that newline.
+
An alternative is to use the od (octal dump) command. This is intended for dumping the contents of files to examine their binary formats. Here I have chosen the -a option which generates character names, and -c which shows characters as backslash escapes:
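+
A reconstruction of the sort of command meant here (od -a prints the names sp, ht and nl for space, tab and newline; the column spacing is approximate and varies between od versions):
+
$ echo -n "$IFS" | od -a -c
+0000000  sp  ht  nl
+              \t  \n
+0000003
+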
The leading numbers are offsets in the "file". Note that we used the -n option to echo to prevent it generating an extra newline.
+
So now we know that the IFS variable contains three characters by default, and that any of these will be used as a delimiter. This makes it possible to prepare strings as follows:
+
$ str=" Wynken, Blynken, and Nod one night
+> Sailed off in a wooden shoe "
+
The > character on the second line is the Bash prompt indicating that the string is incomplete because the closing quote has not yet been entered. Note the existence of leading and trailing spaces.
+
$ printargs $str
+1 Wynken,
+2 Blynken,
+3 and
+4 Nod
+5 one
+6 night
+7 Sailed
+8 off
+9 in
+10 a
+11 wooden
+12 shoe
+
Note that the embedded newline is treated as a word delimiter and the leading and trailing spaces are ignored.
+
If we quote the string (and add square brackets around it to show leading and trailing spaces) we get the following:
+
$ printargs "[$str]"
+1 [ Wynken, Blynken, and Nod one night
+Sailed off in a wooden shoe ]
+
This time the leading spaces and embedded newline are retained and printed as part of the string.
+
Finally we will look at how the IFS variable can be used to perform word splitting on other delimiters.
+
In this example we'll define a string then save the old IFS value and set a new one. We will use an underscore as the delimiter:
+
$ str="all dressed up - and nowhere to go"
+$ oldIFS="$IFS"
+$ IFS="_"
+$ printargs $str
+1 all dressed up - and nowhere to go
+
Note that the string is no longer split up since it contains none of the delimiters in the IFS variable.
+
Here we use one of the Bash features we met in episode 1648 "Bash parameter manipulation", Pattern substitution, with which we change all spaces to underscores:
+
$ printargs ${str// /_}
+1 all
+2 dressed
+3 up
+4 -
+5 and
+6 nowhere
+7 to
+8 go
+
Note that we get 8 words this way since the hyphen is treated as a word.
+
If however we change the IFS variable again to include the hyphen as a delimiter we get a different result:
+
$ IFS="_-"
+$ printargs ${str// /_}
+1 all
+2 dressed
+3 up
+4
+5
+6 and
+7 nowhere
+8 to
+9 go
+
Here we have 9 words since the hyphen is now a delimiter. It might be useful to show the result of the substitution before word splitting:
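+
Under the settings above, the result of the substitution (before any splitting) is:
+
$ echo "${str// /_}"
+all_dressed_up_-_and_nowhere_to_go
+
Don't forget to restore the saved value (IFS="$oldIFS") when you have finished experimenting.
+
For reference, the rest of this section reproduces what the Bash manual page says about expansion, process substitution and word splitting.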
Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion.
+
The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion.
+
On systems that can support it, there is an additional expansion available: process substitution. This is performed at the same time as tilde, parameter, variable, and arithmetic expansion and command substitution.
+
Only brace expansion, word splitting, and pathname expansion can change the number of words of the expansion; other expansions expand a single word to a single word. The only exceptions to this are the expansions of "$@" and "${name[@]}" as explained above (see PARAMETERS).
Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files. It takes the form of <(list) or >(list). The process list is run with its input or output connected to a FIFO or some file in /dev/fd. The name of this file is passed as an argument to the current command as the result of the expansion. If the >(list) form is used, writing to the file will provide input for list. If the <(list) form is used, the file passed as an argument should be read to obtain the output of list.
+
When available, process substitution is performed simultaneously with parameter and variable expansion, command substitution, and arithmetic expansion.
+
Word Splitting
+
The shell scans the results of parameter expansion, command substitution, and arithmetic expansion that did not occur within double quotes for word splitting.
+
The shell treats each character of IFS as a delimiter, and splits the results of the other expansions into words using these characters as field terminators. If IFS is unset, or its value is exactly <space><tab><newline>, the default, then sequences of <space>, <tab>, and <newline> at the beginning and end of the results of the previous expansions are ignored, and any sequence of IFS characters not at the beginning or end serves to delimit words. If IFS has a value other than the default, then sequences of the whitespace characters space and tab are ignored at the beginning and end of the word, as long as the whitespace character is in the value of IFS (an IFS whitespace character). Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field. A sequence of IFS whitespace characters is also treated as a delimiter. If the value of IFS is null, no word splitting occurs.
+
Explicit null arguments ("" or '') are retained. Unquoted implicit null arguments, resulting from the expansion of parameters that have no values, are removed. If a parameter with no value is expanded within double quotes, a null argument results and is retained.
+
Note that if no expansion occurs, no splitting is performed.
+
+
+
+
+
For some reason I said "post decrement" in the audio, where this is obviously a "post increment". Oops!↩
+
+
+
+
+
+
+
diff --git a/eps/hpr2060/hpr2060_centre.sed b/eps/hpr2060/hpr2060_centre.sed
new file mode 100755
index 0000000..48a327c
--- /dev/null
+++ b/eps/hpr2060/hpr2060_centre.sed
@@ -0,0 +1,23 @@
+#!/bin/sed -f
+
+# Put 80 spaces in the buffer
+1 {
+x
+s/^$/ /
+s/^.*$/&&&&&&&&/
+x
+}
+
+# del leading and trailing spaces
+y/\t/ /
+s/^ *//
+s/ *$//
+
+# add a newline and 80 spaces to end of line
+G
+
+# keep first 81 chars (80 + a newline)
+s/^\(.\{81\}\).*$/\1/
+
+# \2 matches half of the spaces, which are moved to the beginning
+s/^\(.*\)\n\(.*\)\2/\2\1/
diff --git a/eps/hpr2060/hpr2060_demo5.sed b/eps/hpr2060/hpr2060_demo5.sed
new file mode 100755
index 0000000..3460480
--- /dev/null
+++ b/eps/hpr2060/hpr2060_demo5.sed
@@ -0,0 +1,7 @@
+1c\
+------\
+This line has been censored\
+By the Department of Not Seeing Stuff\
+------
+
+3q
diff --git a/eps/hpr2060/hpr2060_full_shownotes.html b/eps/hpr2060/hpr2060_full_shownotes.html
new file mode 100755
index 0000000..b845a8a
--- /dev/null
+++ b/eps/hpr2060/hpr2060_full_shownotes.html
@@ -0,0 +1,487 @@
+
+
+
+
+
+
+
+ Introduction to sed - part 5 (HPR Show 2060)
+
+
+
+
+
+
+
+
+
+
Introduction to sed - part 5 (HPR Show 2060)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
This episode is the last one in the "Introduction to sed" series.
+
In the last episode we looked at the full story of how sed works with the hold and pattern buffers. We looked at some of the commands that we had not yet seen and how they can be used to do more advanced processing using sed's buffers.
+
In this episode we will look at a selection of the remaining commands, which might be described as quite obscure (even very obscure). We will also look at some of the example sed scripts found in the GNU sed manual.
+
Commands
+
Finishing off less frequently used commands
+
We omitted a few commands in this group in the last episode. I will not cover everything in this category but there are some that might be useful, which we'll look at now.
+
The c command
+
This is one of the commands for inserting text in sed. The command is written as:
+
c\
+line1\
+line2
+
The c command itself must be followed by a backslash, as should all of the lines which follow, except the last. The backslashes stand for newlines.
+
The command can be preceded by any of the address types we saw in episode 2. The lines matching the address(es) are deleted and replaced by the line(s) associated with this command. If no addresses are given all lines are replaced.
+
Since the command deletes the pattern space a new cycle follows.
+
The c command can be used on the command line, but not very usefully. For example, it is not possible to follow it with further sed commands in the same expression, so another -e option has to be used:
+
$ sed -e '1c\Line removed' -e '3q' sed_demo1.txt
+Line removed
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct
+
Also, only one line can be generated this way:
+
$ sed -e '1c\**Censored**\Do not read!' -e '3q' sed_demo1.txt
+**Censored**Do not read!
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct
+
However, escape characters can be used so the following example generates two lines as intended:
+
$ sed -e '1c\**Censored**\nDo not read!' -e '3q' sed_demo1.txt
+**Censored**
+Do not read!
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct
+
This use of escape sequences is a GNU extension, and it applies only to the one-line form of the command.
+
The c command is best used in a file of sed commands. One has been prepared as demo5.sed which is available on the HPR website. The example below shows the file being listed with the nl command to show line numbers then it is used as a sed script and the results are shown:
+
$ nl -w2 -ba -s': ' demo5.sed
+1: 1c\
+2: ------\
+3: This line has been censored\
+4: By the Department of Not Seeing Stuff\
+5: ------
+6:
+7: 3q
+$ sed -f demo5.sed sed_demo1.txt
+------
+This line has been censored
+By the Department of Not Seeing Stuff
+------
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct
+
Of course, this could all be done on one line using '\n' sequences, as we saw above, but that is extremely GNU sed-specific.
+
The a command
+
This command has the same structure of lines as the c command (its one-line form, discussed below, is a GNU extension).
+
a\
+line1\
+line2
+
The command can be preceded by any of the address types we saw in episode 2. The lines matching the address(es) are processed as normal but are followed by the line(s) associated with this command at the end of the current cycle, or when the next input line is read. If no addresses are given, every line processed by sed is followed by the line(s) of the a command.
+
If using the one-line form (as discussed with the c command) escape sequences like '\n' are allowed.
+
$ sed -e '1a\Chickens' -e '1q' sed_demo1.txt
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+Chickens
+
Here the a command only applies to the first line, after which a line is added. The second -e expression stops processing after line 1, so we only see one original line and one added line.
+
The following example adds a line containing just a hyphen after each line of the file, but the second -e expression stops processing after line 3 so we only see three lines of the file:
+
$ sed -e 'a\-' -e '3q' sed_demo1.txt
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+-
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+-
+Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct
+-
+
The i command
+
This command has the same structure of lines as the c and a commands (again, the one-line form is a GNU extension).
+
i\
+line1\
+line2
+
The command can be preceded by any of the address types we saw in episode 2. The lines matching the address(es) are preceded by the line(s) associated with this command. If no addresses are given, every line processed by sed is preceded by the line(s) of the i command.
+
If using the one-line form (as discussed with the c command) escape sequences like '\n' are allowed.
+
The following example adds a line containing just a hyphen before each line of the file, but the second -e expression stops processing after line 3 so we only see three lines of the file:
+
$ sed -e 'i\-' -e '3q' sed_demo1.txt
+-
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+-
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+-
+Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct
+
This example is similar to the preceding one but it adds an open square bracket before each line and a close square bracket after it. It uses the i and a commands to do this.
+
$ sed -e 'i\[' -e 'a\]' -e '3q' sed_demo1.txt
+[
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+]
+[
+shows every weekday Monday through Friday. HPR has a long lineage going back to
+]
+[
+Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct
+]
In most cases, use of the commands described next (labels and branches) indicates that you are probably better off programming in something like awk or Perl. But occasionally one is committed to sticking with sed, and these commands make it possible to write quite convoluted scripts.
+
+
I am including them in this episode because they will help with understanding some of the examples from the GNU Manual later on.
+
Defining a label
+
It is possible to create simple loops within sed but only by branching to a label conditionally or unconditionally. The label itself consists of a colon and a character sequence:
+
: label
+
The label cannot be associated with an address (it makes no sense), and it serves no other purpose than to act as a point for transfer of execution.
+
The b command
+
This command takes the form:
+
b label
+
It causes an unconditional branch to a label. The label may be omitted in which case the b command causes the next cycle to start.
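+
As an illustration (not from the original notes), a well-known GNU sed one-liner uses a label and the b command, together with the N command, to join all input lines into one:
+
$ printf 'a\nb\nc\n' | sed ':a;N;$!ba;s/\n/ /g'
+a b c
+
The N command appends the next input line to the pattern space; $!ba branches back to the label 'a' on every line except the last, so the whole input accumulates before the final substitution replaces the embedded newlines.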
+
The t command
+
This command takes the form:
+
t label
+
It causes a conditional branch to the label. This happens only if there has been a successful substitution (s command) since the last input line was read or a conditional branch was taken. The label may be omitted, in which case the t command causes the next cycle to start.
The F command
+
This command is one of those specific to GNU sed; for the full list of GNU extensions refer to the GNU Manual. It prints out the file name of the current input file (with a trailing newline).
+
This example contains a command group that is obeyed on line 1 of the input. The commands are an F which prints the filename, and a q which stops processing. Because sed is run in the default "read and print" mode the first line is printed:
+
$ sed -e '1{F;q}' sed_demo1.txt
+sed_demo1.txt
+Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases
+
Examples from the GNU manual
+
Centering lines
+
This example is from the GNU manual and centres all lines of a file in a width of 80 columns.
+
The script, called centre.sed, has been made available on the HPR site, and is reproduced below with line numbers for easy reference. Note that the path to sed has been changed from the original since many Linux distributions store it in /bin rather than /usr/bin.
+
Note that option -f is needed to make sed read the rest of the file.
 1: #!/bin/sed -f
+ 2:
+ 3: # Put 80 spaces in the buffer
+ 4: 1 {
+ 5: x
+ 6: s/^$/          /
+ 7: s/^.*$/&&&&&&&&/
+ 8: x
+ 9: }
+10:
+11: # del leading and trailing spaces
+12: y/\t/ /
+13: s/^ *//
+14: s/ *$//
+15:
+16: # add a newline and 80 spaces to end of line
+17: G
+18:
+19: # keep first 81 chars (80 + a newline)
+20: s/^\(.\{81\}\).*$/\1/
+21:
+22: # \2 matches half of the spaces, which are moved to the beginning
+23: s/^\(.*\)\n\(.*\)\2/\2\1/
+
+
Lines 4-9: This group of commands is executed on line 1 of the input stream.
+
+
Line 5: The x command exchanges the pattern space and the hold space. There will be data in the pattern space which will be stored in the hold space, but the hold space will have been empty originally, so now the pattern space is empty.
+
Line 6: Replaces the empty pattern space by 10 spaces.
+
Line 7: Replaces the 10 spaces in the pattern space with eight copies of themselves (the & is repeated eight times), thereby creating 80 spaces.
+
Line 8: Exchanges the buffers again so that the 80 spaces are stored in the hold space and the pattern space is back as it was.
+
+
Line 12: In the GNU manual the command is written as y/tab/ / but the word tab is meant to signify a tab character, since it is invisible. The copy used here has used the '\t' metacharacter (or escape sequence), though this is GNU-specific. The y command replaces all tabs by spaces.
+
Line 13: This s command removes leading spaces.
+
Line 14: This s command removes trailing spaces.
+
Line 17: The G command appends the contents of the hold space to the pattern space, preceded by a newline. The contents of the hold space are not changed. Remember that the hold space contains 80 spaces.
+
Line 20: This s command replaces the pattern space by the first 81 characters, so this should consist of the original line, the newline and some of the newly added spaces.
+
Line 23: This s command matches the line up to the newline (using grouping), and then as many of the spaces after the newline as can be split into two equal halves (the second group followed by \2, a second copy of itself). Half of the spaces (\2) are then placed at the beginning of the line, centring it.
+
+
This example is built for centring in 80 columns and would need a change to the s command on line 20 to use a different width. It will also truncate lines longer than 80 characters. However, it is a useful demonstration.
+
Reverse lines of files
+
This example is from the GNU manual. It emulates the Unix command tac which is a reverse version of cat. The example is quite well described in the manual, but it seemed desirable to look at it in even more detail.
+
The script, called tac.sed, has been made available on the HPR site, and is reproduced below with line numbers for easy reference. Note that the path to sed has been changed as before.
+
Note that in addition to option -f we also have -n to suppress auto-printing.
+
 1: #!/bin/sed -nf
+ 2:
+ 3: # reverse all lines of input, i.e. first line became last, ...
+ 4:
+ 5: # from the second line, the buffer (which contains all previous lines)
+ 6: # is *appended* to current line, so, the order will be reversed
+ 7: 1!G
+ 8:
+ 9: # on the last line we're done -- print everything
+10: $p
+11:
+12: # store everything on the buffer again
+13: h
+
+
Line 1: This is the usual crunch-bang or hash-bang line that is found on executable sed scripts.
+
Line 7: This is an address and a single command. The address is a line number, 1, but is negated so that it refers to all other lines. The G command appends a newline to the contents of the pattern space, and then appends the contents of the hold space to that of the pattern space.
+
Line 10: This is another command controlled by an address, a $, as we saw in episode 3. The command is p which prints the pattern space. So, when the last input line has been reached the entire accumulated pattern space is printed.
+
Line 13: The h command replaces the contents of the hold space with the contents of the pattern space. This is done for every input line since it has no address.
+
+
So, the algorithm used here is:
+
+
The first line read by sed does not trigger anything other than the h command on line 13 of the script. This means that the line is stored in the hold space.
+
The second and subsequent input lines trigger the G command on line 7 of the script. For input line 2, for example, this command appends a newline to the pattern space, then appends input line 1 (previously stored in the hold space) to it. Then the h command on line 13 is invoked and the pattern space (in the order line 2/line 1) is stored in the hold space again. In this way, each line is appended to the already accumulated lines in reverse order.
+
When the last line is read the G command on line 7 will be triggered as before, appending the hold space contents again, with the result that the pattern space now holds the entire file in reverse order. Now, however, the p command on line 10 will trigger and the result of reversing everything will be printed.
+
+
It bothers me slightly that the h command on line 13 will be run again after printing everything, but its effects will not be seen. I would have wanted to make line 10 into:
+
$ {
+ p
+ q
+}
+
This would stop sed after printing. However, this is probably just obsessive thinking on my part!
+
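To try the script out (assuming it has been saved as tac.sed and made executable):
+
$ printf 'one\ntwo\nthree\n' | ./tac.sed
+three
+two
+one
+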
Reverse characters of lines
+
This example is from the GNU manual where sed is used to emulate the rev command. The script, called reverse_characters.sed, has been made available on the HPR site, and is reproduced below with line numbers for easy reference. Note that the path to sed has been changed from the original as before. I have also changed line 6, replacing the embedded literal newlines by '\n' sequences, which might mean the modified script will not run on non-GNU sed versions.
 1: #!/bin/sed -f
+ 2:
+ 3: /../!b
+ 4:
+ 5: # Reverse a line. Begin embedding the line between two newlines
+ 6: s/^.*$/\n&\n/
+ 7:
+ 8: # Move first character at the end. The regexp matches until
+ 9: # there are zero or one characters between the markers
+10: tx
+11: :x
+12: s/\(\n.\)\(.*\)\(.\n\)/\3\2\1/
+13: tx
+14:
+15: # Remove the newline markers
+16: s/\n//g
+
+
Line 1: This is the usual hash-bang line that is found on executable sed scripts. This one does not suppress auto-printing.
+
Line 3: Here the b command is invoked on any line that does not have two characters in it. The b command normally invokes an unconditional branch to a label, but if the label is omitted it triggers a new cycle. The effect here is that any line with one character or less is simply printed and the rest of the commands are ignored. There is no point in reversing such a line!
+
Line 6: This s command replaces the current line by itself (&) with a newline at the beginning and the end.
+
Line 10: This is documented in the GNU manual as: "This is often needed to reset the flag that is tested by the t command." I have tried removing it and the script still works. Other versions of sed may not however.
+
Line 11: This is a label 'x' for branch commands.
+
Line 12: This s command uses the newlines added on line 6 to determine which characters to swap. It uses groups to indicate the character after the first newline and before the second one, and groups the rest of the line, allowing that part to be zero or more characters long. It replaces what it finds with a reversed version of the first and third groups. This also ensures that the moved characters end up on the other side of the newlines. Note that this only finds the characters inside the newlines and swaps two. The rest of the line before the first newline and after the second are left alone.
+
Line 13: The t command is a conditional branch to label 'x'. It will only branch if the s command on line 12 performs a substitution. In this way lines 11-13 form a loop to repeat the action on line 12 until the regular expression stops matching.
+
Line 16: Having reversed the line the newlines can be removed, and this s command does this, and the reversed line can then be printed before the next cycle begins.
+
+
The processing of a line can be visualised by using the l command. I have provided another version of this script containing such commands to show what is happening:
#!/bin/sed -f
+
+# reverse_characters_debug.sed
+#
+# A version which prints what it's doing to help understand the process
+
+/../!b
+
+# Reverse a line. Begin embedding the line between two newlines
+s/^.*$/\n&\n/
+
+# List the line to see what the command above did to it
+l
+
+# Move first character at the end. The regexp matches until
+# there are zero or one characters between the markers
+tx
+:x
+s/\(\n.\)\(.*\)\(.\n\)/\3\2\1/
+# List the result of each loop iteration
+l
+tx
+
+# Remove the newline markers
+s/\n//g
+
It is available on the HPR site as reverse_characters_debug.sed and you can examine it yourself. Running it on a simple string gives output as follows:
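+
Here is a reconstruction of such a run, using the hypothetical input string 'abcz' (any string starting with 'a' and ending in 'z' shows the swaps described below). Note that the l command also runs once more after the final, failed substitution, so one line appears twice:
+
$ echo 'abcz' | ./reverse_characters_debug.sed
+\nabcz\n$
+z\nbc\na$
+zc\n\nba$
+zc\n\nba$
+zcba
+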
The first line of the output shows the original line being embedded between two newlines.
+
The second line shows the 'a' and 'z' being swapped as discussed in the explanation. Then successive lines show further swaps based on the positions of the two newlines.
+
The auto-printed last line shows the final result after all swaps have been carried out.
+
My answer to the quiz in the last episode
+
As promised here is my answer to the quiz I set in episode 4. The request was to use sed_demo1.txt, taking the first line and converting it to Pig Latin. The brief rules were:
+
+
Take the first letter of each word and place it at the end, followed by 'ay'. Thus 'pig' becomes 'igpay' and 'latin' becomes 'atinlay'.
+
Skip 1- and 2-letter words, since 'a' -> 'aay' is not wanted.
+
Do not bother about capitals.
+
+
Here's what I did:
+
$ sed -ne '1s/\(\b\w\)\(\w\{2,\}\)/\2\1ay/gp' sed_demo1.txt
+ackerHay ublicPay adioRay (PRHay) is an nternetIay adioRay howsay (odcastpay) hattay eleasesray
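+
Briefly, how this works: \(\b\w\) captures the first letter of a word at a word boundary, \(\w\{2,\}\) requires at least two more word characters (which is what skips the 1- and 2-letter words), and the replacement \2\1ay moves the captured first letter to the end and adds 'ay'. The -n option and the p flag combine to print only the line on which the substitution was made.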
+
Sadly, there were no winners of this little competition because there were no entries. It's probably just as well that I am finishing this series here because I think I probably sent everyone to sleep several episodes back!!
+
+
diff --git a/eps/hpr2060/hpr2060_reverse_characters.sed b/eps/hpr2060/hpr2060_reverse_characters.sed
new file mode 100755
index 0000000..471b855
--- /dev/null
+++ b/eps/hpr2060/hpr2060_reverse_characters.sed
@@ -0,0 +1,16 @@
+#!/bin/sed -f
+
+/../! b
+
+# Reverse a line. Begin embedding the line between two newlines
+s/^.*$/\n&\n/
+
+# Move first character at the end. The regexp matches until
+# there are zero or one characters between the markers
+tx
+:x
+s/\(\n.\)\(.*\)\(.\n\)/\3\2\1/
+tx
+
+# Remove the newline markers
+s/\n//g
diff --git a/eps/hpr2060/hpr2060_reverse_characters_debug.sed b/eps/hpr2060/hpr2060_reverse_characters_debug.sed
new file mode 100755
index 0000000..8b36c58
--- /dev/null
+++ b/eps/hpr2060/hpr2060_reverse_characters_debug.sed
@@ -0,0 +1,25 @@
+#!/bin/sed -f
+
+# reverse_characters_debug.sed
+#
+# A version which prints what it's doing to help understand the process
+
+/../! b
+
+# Reverse a line. Begin embedding the line between two newlines
+s/^.*$/\n&\n/
+
+# List the line to see what the command above did to it
+l
+
+# Move first character at the end. The regexp matches until
+# there are zero or one characters between the markers
+tx
+:x
+s/\(\n.\)\(.*\)\(.\n\)/\3\2\1/
+# List the result of each loop iteration
+l
+tx
+
+# Remove the newline markers
+s/\n//g
diff --git a/eps/hpr2060/hpr2060_tac.sed b/eps/hpr2060/hpr2060_tac.sed
new file mode 100755
index 0000000..3c655d2
--- /dev/null
+++ b/eps/hpr2060/hpr2060_tac.sed
@@ -0,0 +1,13 @@
+#!/bin/sed -nf
+
+# reverse all lines of input, i.e. first line became last, ...
+
+# from the second line, the buffer (which contains all previous lines)
+# is *appended* to current line, so, the order will be reversed
+1! G
+
+# on the last line we're done -- print everything
+$ p
+
+# store everything on the buffer again
+h
diff --git a/eps/hpr2073/hpr2073_full_shownotes.html b/eps/hpr2073/hpr2073_full_shownotes.html
new file mode 100755
index 0000000..e045d41
--- /dev/null
+++ b/eps/hpr2073/hpr2073_full_shownotes.html
@@ -0,0 +1,114 @@
+
+
+
+
+
+
+
+ The power of GNU Readline - part 1 (HPR Show 2073)
+
+
+
+
+
+
+
+
+
The power of GNU Readline - part 1 (HPR Show 2073)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
GNU Readline
+
We all use GNU Readline if we use the CLI in Linux, because it manages input, line editing and command history in Bash and in many other tools.
+
I have been using Unix, and later Linux, since the 1980s, and gradually learnt how to do things like jumping to the start or end of the line, deleting backwards as far as a space, or deleting the entire line.
+
I think that learning GNU Readline is worthwhile since it contains a lot more features than those I have just described. I thought I would do a few episodes on HPR to introduce some of what I consider to be the most useful features.
+
I want to keep the episodes short since this is a dry subject, and, if you are anything like me, you can't take in more than a few key sequences at a time.
+
The source of my information is the GNU Readline Manual. This is very well written, if a little overwhelming.
+
Keys and Notation
+
Most of the features in GNU Readline are invoked by multi-key sequences. These involve the Control key and the so-called Meta key. The Control key is usually marked Ctrl on the keyboard. The Meta key is the key marked Alt.
+
The notation used in the GNU Readline manual is C-k for 'Control-k', meaning the character produced when the k key is pressed while the Control key is being held down.
+
For the Meta key the notation M-k (Meta-k) means the character produced when the k key is pressed while the Meta key is being held down.
+
If your keyboard does not have a Meta key then the same result can be obtained by pressing the Esc key, releasing it, then pressing the k key.
+
In some instances both the Control and the Meta key might be used, so M-C-k would mean the character produced when the k key is pressed while the Meta and Control keys are being held down.
+
Commands you probably already know
+
+
C-b
+
Move back one character. This is the same as the left arrow key if you have one.
+
+
C-f
+
Move forward one character. This is the same as the right arrow key if you have one.
+
+
Backspace
+
Delete the character to the left of the cursor.
+
+
C-d
+
Delete the character underneath the cursor.
+
+
DEL
+
Depending on your setup this may be the same as Backspace or C-d. In my case (Debian Testing with Xfce) it's the same as C-d.
+
+
+
Commands you might not know
+
+
C-_ (or C-xC-u)
+
Undo the last editing command. Can undo all the way back to the blank line you started with. Remember the '_' underscore is usually on the same key as the '-' hyphen, so you'll need to use Control, Shift and underscore.
+
+
C-a
+
Move to the start of the line. This is the same as the Home key if you have one.
+
+
C-e
+
Move to the end of the line. This is the same as the End key if you have one.
+
+
M-f
+
Move forward a word. A word is what you would expect, a sequence of letters and numbers.
+
+
M-b
+
Move backward a word.
+
+
C-l
+
Clear the screen, reprinting the current line at the top.
+
+
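As an aside (not covered in the audio): Bash's built-in bind command can show what Readline currently has bound, which can be handy while learning these sequences. For example, something like:
+
$ bind -p | grep -w undo    # list all bindings, picking out 'undo'
+$ bind -q undo              # ask which keys invoke the 'undo' function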
+
Example
+
This is a little difficult to demonstrate on an audio podcast, but hopefully the description will be understandable.
+
+
In a terminal type: The quick brown fox
+
+
After the 'x' of 'fox' press M-b. The cursor moves to the 'f' of 'fox'.
+
Press M-b again. The cursor moves to the 'b' of 'brown'.
+
Press C-d. The 'b' is deleted and the cursor is on the 'r' of 'brown'.
+
Press C-_. The 'b' is restored, but the cursor is on the 'r' still.
My daughter is a student at university and uses her laptop with a headset most of the time. She shares a flat with a friend and they are both studying, so they don’t want to annoy each other with noise.
+
The headset my daughter uses has a very long cable and earlier this year she tripped over it. The microphone jack was OK, but the headphone jack snapped off at the first ring and the remaining piece was left in the socket1.
+
Here’s the “before” picture:
+
1. Unbroken jack plugs
+
She actually managed to get it working a little by using ‘white tack’ to hold the plug in contact with the broken bit. You can see the remnants on the broken plug.
+
2. Broken jack plug
+
She bought a new headset, and the problem of the unusable socket was circumvented by buying a USB DAC (as recommended by Jon Kulp, see Links), which did the job just fine.
+
3. Using a USB DAC
+
We’d asked around to find out how much a fix would cost. It would need the laptop to be stripped down, the jack plug unit de-soldered and removed, and a new one fitted in its place. It turned out this was going to cost up to £150.
+
The Fix
+
I came up with a plan for getting the broken piece of plug out of the socket. We had tried forceps of various sizes, but there was nothing to grab on to. We’d toyed with using glue but hadn’t been sure of success.
+
My plan was to drill a hole through the broken piece and screw a fine self-tapping screw into it and haul it out.
+
My daughter is now back from university for the vacation, but was away for the weekend, so I decided to try while she wasn’t using the laptop - with her permission of course.
+
I set the laptop into a good position using my portable workbench. The laptop was on its edge, with a chair underneath to support it. The jaws of the work bench held it gently, with bits of cardboard to protect its surface. I used a spirit level to get it aligned properly.
+
4. Laptop secured for drilling
+
I tried to get a closeup of the broken piece while I had the laptop set up, which you can see here:
+
5. Closeup of socket
+
I have a Dremel 4000 kit and a Dremel Workstation (see Links) which is like a small drill press. I set this up on the workbench, clamped to it securely, so that the Dremel could be sited above the jack socket.
+
6. Dremel stand set up
+
I had recently bought some metalwork drills for the Dremel, and I planned to use one of them:
+
7. Metal drill set
+
Unfortunately, the bit I wanted to use, the 1.6mm one, didn’t fit my Dremel. The two collets I have for it are too large:
+
8. Collets
+
However, Duct Tape came to the rescue and made the drill bit fit very snugly in the larger collet.
+
9. Duct tape cures everything
+
So, time to set up the Dremel and start drilling:
+
10. Dremel ready to go
+
11. Some drilling has happened
+
I drilled carefully for a while, cleaned up and tried a self-tapping screw, but it didn’t anchor properly. The screw was too big. I tried a little cup hook, and it bit into the plug. I pulled and out came … half the broken plug! It had snapped at the second plastic ring.
+
12. Trying to hook out plug
+
13. Got part of it
+
I carried on drilling, being very careful about the depth, but the screw trick did not work for the remaining piece. I was about to give up when I thought of using a small screwdriver. This time I had success:
+
14. Got it all
+
After cleaning out the hole with a vacuum cleaner the socket seems fine and accepts a jack plug again.
+
Result
+
I didn’t know if the laptop had been fixed when I recorded the audio, but my daughter tested her headphones under Windows when she got home and … yes! They worked fine!
+
Conclusion
+
Sometimes a totally mad scheme actually works! I would never have tried this without a drill-stand, though I imagine that at a pinch something like one could be contrived.
+
Also, final point, try not to trip over your headphone leads!!
+
Links
+
Note: The Amazon links below are for information. I have no financial involvement with Amazon; these are not Affiliate links.
For some reason I said it was the microphone plug that had been snapped, but it was the headphone one, as you can see from the pictures.↩
+
+
+
+
diff --git a/eps/hpr2081/hpr2081_img_001.png b/eps/hpr2081/hpr2081_img_001.png
new file mode 100755
index 0000000..6d354bb
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_001.png differ
diff --git a/eps/hpr2081/hpr2081_img_002.png b/eps/hpr2081/hpr2081_img_002.png
new file mode 100755
index 0000000..a3b9607
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_002.png differ
diff --git a/eps/hpr2081/hpr2081_img_003.png b/eps/hpr2081/hpr2081_img_003.png
new file mode 100755
index 0000000..b879253
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_003.png differ
diff --git a/eps/hpr2081/hpr2081_img_004.png b/eps/hpr2081/hpr2081_img_004.png
new file mode 100755
index 0000000..e1dd95a
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_004.png differ
diff --git a/eps/hpr2081/hpr2081_img_005.png b/eps/hpr2081/hpr2081_img_005.png
new file mode 100755
index 0000000..db5f2e5
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_005.png differ
diff --git a/eps/hpr2081/hpr2081_img_006.png b/eps/hpr2081/hpr2081_img_006.png
new file mode 100755
index 0000000..088b9c7
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_006.png differ
diff --git a/eps/hpr2081/hpr2081_img_007.png b/eps/hpr2081/hpr2081_img_007.png
new file mode 100755
index 0000000..3b23d1e
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_007.png differ
diff --git a/eps/hpr2081/hpr2081_img_008.png b/eps/hpr2081/hpr2081_img_008.png
new file mode 100755
index 0000000..967cefa
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_008.png differ
diff --git a/eps/hpr2081/hpr2081_img_009.png b/eps/hpr2081/hpr2081_img_009.png
new file mode 100755
index 0000000..7bfca39
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_009.png differ
diff --git a/eps/hpr2081/hpr2081_img_010.png b/eps/hpr2081/hpr2081_img_010.png
new file mode 100755
index 0000000..00495cc
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_010.png differ
diff --git a/eps/hpr2081/hpr2081_img_011.png b/eps/hpr2081/hpr2081_img_011.png
new file mode 100755
index 0000000..573690f
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_011.png differ
diff --git a/eps/hpr2081/hpr2081_img_012.png b/eps/hpr2081/hpr2081_img_012.png
new file mode 100755
index 0000000..51f1540
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_012.png differ
diff --git a/eps/hpr2081/hpr2081_img_013.png b/eps/hpr2081/hpr2081_img_013.png
new file mode 100755
index 0000000..d2c954d
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_013.png differ
diff --git a/eps/hpr2081/hpr2081_img_014.png b/eps/hpr2081/hpr2081_img_014.png
new file mode 100755
index 0000000..180f745
Binary files /dev/null and b/eps/hpr2081/hpr2081_img_014.png differ
diff --git a/eps/hpr2093/hpr2093_full_shownotes.html b/eps/hpr2093/hpr2093_full_shownotes.html
new file mode 100755
index 0000000..a725ff7
--- /dev/null
+++ b/eps/hpr2093/hpr2093_full_shownotes.html
@@ -0,0 +1,150 @@
+
+
+
+
+
+
+
+ GNU Health (HPR Show 2093)
+
+
+
+
+
+
+
+
+
GNU Health (HPR Show 2093)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
This is an interview with Dr Tom Kane and his student Euan Livingstone in Tom’s office at Edinburgh Napier University (ENU) on 2016-07-06.
+
Tom and Euan are investigating ways of running GNU Health for evaluation and demonstration purposes, using multiple Raspberry Pi systems and an Intel NUC. In particular they want to evaluate conformance with interoperability standards (FHIR), and are trying to build a reference implementation for decision makers who are procuring a Health and Hospital Information System.
+
In the interview Tom used some terminology that I have provided links for here and at the end:
I had forgotten where I’d seen Luis Falcón, originator of GNU Health, being interviewed. It was on FLOSS Weekly, as linked below.
+
Equipment
+
Pi Tower
+
The Raspberry Pi’s are able to run GNU Health itself.
+
The Raspberry Pi 3’s are not all in use all of the time yet. This is partly because the project is still in the development stage and partly because there is some doubt about the 10-port hub’s ability to power them all at the same time.
+
The Pi’s are all using Ethernet connections at the moment, though the built-in Wi-Fi is a possibility.
+
The topmost Pi is connected to a small SSD for storage purposes.
+
+1. The tower of ten Raspberry Pi 3 systems
+
+2. The tower of ten Raspberry Pi 3 systems with the 10-port hub
+
Some of the Pi’s are mounted on case tops which had to be drilled out for the nylon stand-offs fixed to the boards. The original metal stand-offs had screws top and bottom, but removal of the screw heads allowed them to be joined together.
+
+3. The tower made from modified individual cases
+
+4. Close-up of the tower
+
+5. The 10-port hub may be a little under-powered for 10 Pi 3’s
+
Intel NUC
+
The NUC is used to run VMware VMs running some of the support systems, such as a database and the PACS image library. Further details of what is being run are given below.
+
+6. An Intel NUC with an i7 processor, 16GB RAM and an SSD
+
Software
+
The NUC is being used to run virtual machines for setting up components needed to support GNU Health, like PostgreSQL, a PACS server and a LIMS server. Some of these have already been migrated to Raspberry Pi’s as shown below.
+
Virtual Machines
+
+
4 x GNU Health application running on the Tryton Server, installed on Ubuntu 16
+
1 x Shared database on a PostgreSQL Server, installed on Ubuntu 16
+
1 x Orthanc PACS Server, installed on Ubuntu 16
+
1 x BikaLIMS LIMS Server, installed on Ubuntu 16
+
+
Raspberry Pis
+
+
4 x GNU Health application running on the Tryton Server, installed on Raspbian
+
1 x GNU Health application running on the Tryton Server and a database on a PostgreSQL Server, installed on Raspbian
+
1 x OrthancPi PACS Server, installed on Raspbian
+
+
Screen Images
+
+1. GNU Health Web Interface
+
+2. Laboratory Information Management System
+
+3. Orthanc DICOM server for medical imaging
+
+4. Orthanc Web Viewer example image
+
Thanks
+
Thanks to Tom and Euan for taking the time to talk to me.
+
+
diff --git a/eps/hpr2093/hpr2093_img_00.png b/eps/hpr2093/hpr2093_img_00.png
new file mode 100755
index 0000000..6fb754a
Binary files /dev/null and b/eps/hpr2093/hpr2093_img_00.png differ
diff --git a/eps/hpr2093/hpr2093_img_01.png b/eps/hpr2093/hpr2093_img_01.png
new file mode 100755
index 0000000..bffef56
Binary files /dev/null and b/eps/hpr2093/hpr2093_img_01.png differ
diff --git a/eps/hpr2093/hpr2093_img_02.png b/eps/hpr2093/hpr2093_img_02.png
new file mode 100755
index 0000000..8e8bb92
Binary files /dev/null and b/eps/hpr2093/hpr2093_img_02.png differ
diff --git a/eps/hpr2093/hpr2093_img_03.png b/eps/hpr2093/hpr2093_img_03.png
new file mode 100755
index 0000000..5323023
Binary files /dev/null and b/eps/hpr2093/hpr2093_img_03.png differ
diff --git a/eps/hpr2093/hpr2093_img_04.png b/eps/hpr2093/hpr2093_img_04.png
new file mode 100755
index 0000000..22c2b6f
Binary files /dev/null and b/eps/hpr2093/hpr2093_img_04.png differ
diff --git a/eps/hpr2093/hpr2093_img_05.png b/eps/hpr2093/hpr2093_img_05.png
new file mode 100755
index 0000000..4e55e59
Binary files /dev/null and b/eps/hpr2093/hpr2093_img_05.png differ
diff --git a/eps/hpr2093/hpr2093_img_06.png b/eps/hpr2093/hpr2093_img_06.png
new file mode 100755
index 0000000..88d6015
Binary files /dev/null and b/eps/hpr2093/hpr2093_img_06.png differ
diff --git a/eps/hpr2093/hpr2093_img_07.png b/eps/hpr2093/hpr2093_img_07.png
new file mode 100755
index 0000000..791d4fc
Binary files /dev/null and b/eps/hpr2093/hpr2093_img_07.png differ
diff --git a/eps/hpr2093/hpr2093_img_08.png b/eps/hpr2093/hpr2093_img_08.png
new file mode 100755
index 0000000..917c576
Binary files /dev/null and b/eps/hpr2093/hpr2093_img_08.png differ
diff --git a/eps/hpr2093/hpr2093_img_09.png b/eps/hpr2093/hpr2093_img_09.png
new file mode 100755
index 0000000..677da7f
Binary files /dev/null and b/eps/hpr2093/hpr2093_img_09.png differ
diff --git a/eps/hpr2096/hpr2096_full_shownotes.html b/eps/hpr2096/hpr2096_full_shownotes.html
new file mode 100755
index 0000000..ece9cc2
--- /dev/null
+++ b/eps/hpr2096/hpr2096_full_shownotes.html
@@ -0,0 +1,363 @@
+
+
+
+
+
+
+
+ Useful Bash functions - part 2 (HPR Show 2096)
+
+
+
+
+
+
+
+
+
+
Useful Bash functions - part 2 (HPR Show 2096)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Overview
+
This is the second show about Bash functions. In this one I revisit the yes_no function from the last episode and deal with some of the deficiencies of that version.
+
As before it would be interesting to receive feedback on these versions of the function and would be great if other Bash users contributed ideas of their own.
+
The yes_no function revisited
+
In the last episode (1757, released 28 April 2015) where I demonstrated some Bash functions I use, I talked about my yes_no function. It is called with a question in the form of a prompt string and an optional default answer and returns a Bash true or false result so that it can be used when making choices in scripts.
+
When run, the version I talked about placed the default value on the line after the prompt. This had to be deleted if the user did not want to accept the default, and feedback showed that this was probably a poor design.
+
Since then I have redesigned this function. I have two versions which I am talking about in this episode.
+
The mark 2 version
+
As before in episode 1757 this is a function that can be used to return a true/false result in an if statement. The main differences are that it can generate part of the prompt automatically, and it doesn’t show the default on the command line.
+
So, invoking it thus:
+
if ! yes_no_mk2 'Do you want to continue? %s ' 'N'; then
+ return
+fi
+
results in the prompt:
+
Do you want to continue? [y/N]
+
The function has replaced %s with the string [y/N] which is a convention you will often see in command line tools. The square brackets hold the two possible responses with the default one being capitalised. So, this one shows the responses should be ‘y’ or ‘n’ but that if nothing is typed it is taken as being ‘n’.
#=== FUNCTION ================================================================
+# NAME: yes_no_mk2
+# DESCRIPTION: Read a Yes or No response from STDIN and return a suitable
+# numeric value
+# PARAMETERS: 1 - Prompt string for the read
+# 2 - Default value (optional)
+# RETURNS: 0 for a response of Y or YES, 1 otherwise
+#===============================================================================
+yes_no_mk2 () {
+    local prompt="${1:?Usage: yes_no prompt [default]}"
+    local default="${2^^}"
+    local ans res
+
+    if [[ $prompt =~ %s ]]; then
+        if [[ -n $default ]]; then
+            default=${default:0:1}
+            case "$default" in
+                Y) printf -v prompt "$prompt" "[Y/n]";;
+                N) printf -v prompt "$prompt" "[y/N]";;
+                *) echo "Error: ${FUNCNAME[0]}: Line ${BASH_LINENO[0]}: Default must be 'Y' or 'N'"
+                    exit 1
+                    ;;
+            esac
+        else
+            echo "Error: ${FUNCNAME[0]}: Line ${BASH_LINENO[0]}: Default required"
+            exit 1
+        fi
+    fi
+
+    #
+    # Read and handle CTRL-D (EOF)
+    #
+    read -e -p "$prompt" ans
+    res="$?"
+    if [[ $res -ne 0 ]]; then
+        echo "Read aborted"
+        return 1
+    fi
+
+    [ -z "$ans" ] && ans="$default"
+    if [[ ${ans^^} =~ ^Y(E|ES)?$ ]]; then
+        return 0
+    else
+        return 1
+    fi
+}
+
+
Lines 10-12 define variables to hold the arguments and other things. Note that line 11 ensures the default (which should be ‘y’ or ‘n’) is in upper case.
+
Lines 14-28 deal with the prompt string.
+
+
The presence of ‘%s’ is checked on line 14 using a Bash regular expression.
+
If this is present then there needs to be a default, and the test on line 15 checks whether the default variable is non-empty using the -n operator.
+
If it is not empty then just the first character is taken on line 16.
+
Lines 17-23 are a case statement that takes specific action based on the contents of the default variable.
+
+
If the contents are ‘Y’ then the string ‘[Y/n]’ is substituted into the prompt using a printf command.
+
An ‘N’ produces ‘[y/N]’ the same way.
+
If it is neither then the function generates an error message and exits the entire script. Note that ‘${FUNCNAME[0]}’ returns the name of the current function in a general way, and ‘${BASH_LINENO[0]}’ contains the line number at which the function was called. (There is a small demonstration of these two variables after this list.)
+
+
Lines 25-26 deal with the case where the default variable is empty. That’s an error because the prompt variable contains the %s substitution point, so the function aborts.
+
+
Line 33 performs the read with the prompt, collecting what was typed in variable ans.
+
Line 34 saves the result from the read in case it was aborted with CTRL-D.
+
Lines 35-38 will cause the script to return a false result if CTRL-D was pressed.
+
Line 40 replaces ans with the default value if it is empty.
+
Lines 41-45 check the upper case version of ans, comparing it against a regular expression to see if it is ‘Y’, ‘YE’ or ‘YES’. If it is, true is returned; if it isn’t, the function returns false.
+
+
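As an aside, here is a quick way to see what these two variables report (a minimal sketch; the file name and function are invented for the demonstration):
+
#!/bin/bash
+# demo.sh - show the current function name and the caller's line number
+whereami () {
+    echo "in ${FUNCNAME[0]}, called from line ${BASH_LINENO[0]}"
+}
+whereami    # this call is on line 6 of the file
+
Running it prints ‘in whereami, called from line 6’.
+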
The mark 3 version
+
The problem with the mark 2 version is that it treats an answer that is not ‘Y’ as if it is ‘N’. I developed mark 3 to compensate for this. It is essentially the same except that it looks for YES and NO (and shorter forms) and rejects anything else.
#=== FUNCTION ================================================================
+# NAME: yes_no_mk3
+# DESCRIPTION: Read a Yes or No response from STDIN (only these values are
+# accepted) and return a suitable numeric value.
+# PARAMETERS: 1 - Prompt string for the read
+# 2 - Default value (optional)
+# RETURNS: 0 for a response of Y or YES, 1 otherwise
+#===============================================================================
+yes_no_mk3 () {
+    local prompt="${1:?Usage: yes_no prompt [default]}"
+    local default="${2^^}"
+    local ans res
+
+    if [[ $prompt =~ %s ]]; then
+        if [[ -n $default ]]; then
+            default=${default:0:1}
+            case "$default" in
+                Y) printf -v prompt "$prompt" "[Y/n]";;
+                N) printf -v prompt "$prompt" "[y/N]";;
+                *) echo "Error: ${FUNCNAME[0]} @ line ${BASH_LINENO[0]}: Default must be 'Y' or 'N'"
+                    exit 1
+                    ;;
+            esac
+        else
+            echo "Error: ${FUNCNAME[0]} @ line ${BASH_LINENO[0]}: Default required"
+            exit 1
+        fi
+    fi
+
+    #
+    # Loop until a valid input is received
+    #
+    while true; do
+        #
+        # Read and handle CTRL-D (EOF)
+        #
+        read -e -p "$prompt" ans
+        res="$?"
+        if [[ $res -ne 0 ]]; then
+            echo "Read aborted"
+            return 1
+        fi
+
+        [ -z "$ans" ] && ans="$default"
+
+        #
+        # Look for valid replies and return appropriate values. Print an error
+        # message otherwise and loop around for another go
+        #
+        if [[ ${ans^^} =~ ^Y(E|ES)?$ ]]; then
+            return 0
+        elif [[ ${ans^^} =~ ^NO?$ ]]; then
+            return 1
+        else
+            echo "Invalid reply; please use 'Y' or 'N'"
+        fi
+    done
+}
+
+
Lines 1-32 are the same as in the mark 2 version
+
Line 33 begins a while loop. This uses the built-in true command which just returns a true value. What this combination produces is an infinite loop. The loop ends at line 57.
+
Lines 37-42 print a prompt and read a response in the same way as lines 33-38 in the mark 2 version. Pressing CTRL-D in response to the prompt is detected here and the loop and the function are exited with a false value (1).
+
Line 44 checks to see if anything was typed and if not supplies the default. This is the same as line 40 in the mark 2 version.
+
Lines 50-56 are where the rest of the changes have been made.
+
+
Lines 50 and 51 test the upper case version of the returned response against a regular expression accepting ‘Y’, ‘YE’ or ‘YES’. If it matches then the loop and the function are exited with a true value (0).
+
If the first test does not match the test at line 52 is applied. This tests the upper case version of the returned response against a regular expression accepting ‘N’ or ‘NO’. If it matches then the loop and the function are exited with a false value (1).
+
If neither of the earlier tests matched then an error message is displayed at line 55, and the loop will then repeat the prompt and the tests.
+
+
+
A typical use of this function in a script might be:
+
if ! yes_no_mk3 'Do you want to continue? %s ' 'N'; then
+ echo "Finished"
+ return
+fi
+
This might result in the following dialogue:
+
Do you want to continue? [y/N] what
+Invalid reply; please use 'Y' or 'N'
+Do you want to continue? [y/N] yo
+Invalid reply; please use 'Y' or 'N'
+Do you want to continue? [y/N] nope
+Invalid reply; please use 'Y' or 'N'
+Do you want to continue? [y/N] no
+Finished
+
I’d say that the mark 3 version is more useful overall, and this is the one I shall be adopting myself. A copy of the mark 3 version of the function can be downloaded from here (with the name yes_no).
+
If you have any further additions to this function or comments about it then please let me know.
+
+
diff --git a/eps/hpr2096/hpr2096_functions.sh b/eps/hpr2096/hpr2096_functions.sh
new file mode 100755
index 0000000..e99d0dc
--- /dev/null
+++ b/eps/hpr2096/hpr2096_functions.sh
@@ -0,0 +1,104 @@
+#=== FUNCTION ================================================================
+# NAME: read_value
+# DESCRIPTION: Read a value from STDIN and handle errors.
+# PARAMETERS: 1 - Prompt string for the read
+# 2 - Name of variable to receive the result
+# 3 - Default value (optional)
+# RETURNS: 1 on error, otherwise 0
+#===============================================================================
+read_value () {
+ local prompt="${1:?Usage: read_value prompt outputname [default]}"
+ local outputname="${2:?Usage: read_value prompt outputname [default]}"
+ local default="${3:-}"
+ local var
+
+ if [[ $default != "" ]]; then
+ default="-i \"$default\""
+ fi
+
+ #
+ # Read (with eval to ensure everything's substituted properly)
+ #
+ eval "read -e $default -p '$prompt' var"
+ res="$?"
+ if [[ $res -ne 0 ]]; then
+ echo "Read aborted"
+ return 1
+ fi
+
+ #
+ # Return the value in the nominated variable
+ #
+ var="${var//\"/}"
+ eval "$outputname=\"$var\""
+ return 0
+}
+
+#=== FUNCTION ================================================================
+# NAME: check_value
+# DESCRIPTION: Checks a value against a list of regular expressions
+# PARAMETERS: 1 - the value to be checked
+# 2..n - valid regular expressions
+# RETURNS: 0 if the value checks, otherwise 1
+#===============================================================================
+check_value () {
+ local value="${1:?Usage: check_value value [list_of_regex]}"
+ local matches=0
+
+ #
+ # Drop parameter 1 then there should be more
+ #
+ shift
+ if [[ $# == 0 ]]; then
+ echo "Usage: check_value value [list_of_regex]"
+ return 1
+ fi
+
+ #
+ # Loop through the regex args checking the value, counting matches
+ #
+ while [[ $# -ge 1 ]]
+ do
+ if [[ $value =~ $1 ]]; then
+ (( matches++ ))
+ fi
+ shift
+ done
+
+ #
+ # No matches, then the value is bad
+ #
+ if [[ $matches == 0 ]]; then
+ return 1
+ else
+ return 0
+ fi
+}
+
+#=== FUNCTION ================================================================
+# NAME: read_and_check
+# DESCRIPTION: Reads a value (see read_value) and checks it (see check_value)
+# against an arbitrary long list of Bash regular expressions
+# PARAMETERS: 1 - Prompt string for the read
+# 2 - Name of variable to receive the result
+# 3 - Default value (optional)
+# 4..n - Valid regular expressions
+# RETURNS: Nothing
+#===============================================================================
+read_and_check () {
+ local prompt="${1:?Usage: read_and_check prompt outputname [default]}"
+ local outputname="${2:?Usage: read_and_check prompt outputname [default]}"
+ local default="${3:-}"
+
+ read_value "$prompt" "$outputname" "$default"
+ shift 3
+ until check_value "${!outputname}" $*
+ do
+ echo "Invalid input: ${!outputname}"
+ read_value "$prompt" "$outputname" "$default"
+ done
+
+ return
+}
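+
+# A usage sketch (not part of the original file; the variable 'port' and the
+# regular expression are illustrative). This would prompt for a port number,
+# offering 8080 as the default, and re-prompt until the reply is all digits:
+#
+#   read_and_check 'Port number? ' port '8080' '^[0-9]+$'
+#   echo "Chosen port: $port"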
+
+
diff --git a/eps/hpr2109/hpr2109_full_shownotes.html b/eps/hpr2109/hpr2109_full_shownotes.html
new file mode 100755
index 0000000..1a87eaa
--- /dev/null
+++ b/eps/hpr2109/hpr2109_full_shownotes.html
@@ -0,0 +1,93 @@
+
Hacking my inner ear (HPR Show 2109)
+
Dave Morriss
+
Table of Contents
+
Overview
+
In April 2015 I suddenly found myself getting dizzy as I bent down – to the extent where I actually fell over at one point. I went to see a doctor but didn’t get a diagnosis. After doing some research of my own I concluded that what I was experiencing was a condition called BPPV, which I describe below.
+
I am not medically trained, just an interested observer with a Science background. If you experience issues like the ones I am describing here you should seek medical advice: to determine whether it actually is BPPV, and to obtain properly qualified assistance to deal with it.
+
The Inner Ear
+
As a young student, unsure about whether to go into Biology or Medicine, I became fascinated by the structure of the human inner ear, and studied it extensively for a while. As it turned out there was a question on it for my A-level Zoology examination which I was very happy about!
+
The human inner ear performs two major functions:
+
+
The cochlea is responsible for hearing
+
The vestibular system is responsible for balance
+
+
I’ll skip over the hearing part in this episode, though I still find it fascinating. I am amazed at how it works, and as a tinnitus sufferer, how it malfunctions as you get older. The vestibular system is what I will concentrate on here in order to give insight into the BPPV condition.
+
Many of the texts used to describe the inner ear contain images such as the one shown here:
+
+Right osseous labyrinth. Image from Wikipedia
+
In the past I have misinterpreted this as a separate structure. However, the inner ear actually consists of a series of cavities within the temporal bone of the skull. The various cavities and passages are fluid filled (with perilymph), and within them lie membranous structures also full of fluid (endolymph).
+
The balance system consists of a chamber called the vestibule with three semicircular canals connected to it. These canals are oriented at right angles to each other and are responsible for sensing rotational movement (pitch, roll and yaw).
+
There are five sensory areas in this structure. Each of the semicircular canals has an enlarged region called the ampulla containing a collection of sensory cells that detect fluid movement caused by head rotations. There are also two sensory structures in the vestibule called the saccule and the utricle that detect head position in the horizontal and vertical planes.
+
The sensors in the vestibule are different from the others in that they have calcium carbonate crystals (called variously otoliths, otoconia or statoconia) attached on top. These are acted upon by gravity to provide information about head orientation.
+
The utricle is in a part of the inner ear which is connected to the semicircular canals. The saccule on the other hand is in a different part of the structure.
+
BPPV - Benign Paroxysmal Positional Vertigo
+
What happens in BPPV is that otoliths become dislodged from the utricle and migrate into one of the semicircular canals. Here they disrupt the normal process of sensing head rotation, which causes dizziness (vertigo).
+
The detachment of otoliths can be caused by a head injury, but its probability of spontaneous occurrence increases with age.
+
The condition BPPV is called benign in that it is not a threat to health, paroxysmal because it occurs in short bursts, positional because the changing of head position brings it on, and vertigo because it results in a spinning sensation.
+
Two movements commonly trigger a bout of BPPV: suddenly bending down, and turning over in bed.
+
The semicircular canals detect head rotation by the movement of fluid (endolymph) across the sensory cells. When the head is moved the fluid lags behind because of inertia, and this fluid movement bends the hairs of the sensory cells. The fluid then catches up and the system stabilises.
+
The dizziness is actually caused by the fluid movement, or the sensor cells themselves, being affected by the loose otoliths. This changes the normal behaviour of that particular semicircular canal resulting in sensations of movement when there is none.
+
Normally rotational head movement is linked to eye movement. The eyes compensate for the head movement by moving in the opposite direction, to keep distant images in sight. During BPPV erroneous rotational motion signals are received and the eyes move as they would if the rotation was real. This is called nystagmus and is used as a way of diagnosing BPPV.
+
Treatment
+
I read up about this condition at the time and found there were a number of procedures that could be used to diagnose BPPV and to treat it.
+
The simplest form of treatment consists of head movements designed to move the displaced otolith(s) from the semicircular canal back into the vestibule. I found a method called the Epley manoeuvre.
+
Before this manoeuvre can be used effectively it is necessary to determine where the otolith(s) are likely to be - which of the semicircular canals. I worked this out largely by guesswork, which is one reason why I would not recommend anyone else to do it this way. The direction of the nystagmus indicates which ear is affected but it’s difficult to view your own nystagmus.
+
The severity of the BPPV I experienced was fairly mild. All I suffered from was dizziness, and I managed to cope with it quite well after the first occurrence. I found that performing the Epley Manoeuvre once was sufficient to cure the problem, and since then I have had no recurrence.
+
I guess that I managed to “hack” my own inner ear more by luck than anything else!
+
+
diff --git a/eps/hpr2129/hpr2129_example1.awk b/eps/hpr2129/hpr2129_example1.awk
new file mode 100755
index 0000000..5813465
--- /dev/null
+++ b/eps/hpr2129/hpr2129_example1.awk
@@ -0,0 +1,2 @@
+/^a/ {print "A: " $0}
+/^b/ {print "B: " $0}
diff --git a/eps/hpr2129/hpr2129_example2.awk b/eps/hpr2129/hpr2129_example2.awk
new file mode 100755
index 0000000..42f6e56
--- /dev/null
+++ b/eps/hpr2129/hpr2129_example2.awk
@@ -0,0 +1,5 @@
+#!/usr/bin/awk -f
+#
+# Print all but line 1 with the line number on the front
+#
+NR > 1 { printf "%d: %s\n",NR,$0 }
diff --git a/eps/hpr2129/hpr2129_file1.csv b/eps/hpr2129/hpr2129_file1.csv
new file mode 100755
index 0000000..2ec0ce6
--- /dev/null
+++ b/eps/hpr2129/hpr2129_file1.csv
@@ -0,0 +1,10 @@
+name,color,amount
+apple,red,4
+banana,yellow,6
+strawberry,red,3
+grape,purple,10
+apple,green,8
+plum,purple,2
+kiwi,brown,4
+potato,brown,9
+pineapple,yellow,5
diff --git a/eps/hpr2129/hpr2129_file1.txt b/eps/hpr2129/hpr2129_file1.txt
new file mode 100755
index 0000000..5f719d6
--- /dev/null
+++ b/eps/hpr2129/hpr2129_file1.txt
@@ -0,0 +1,10 @@
+name color amount
+apple red 4
+banana yellow 6
+strawberry red 3
+grape purple 10
+apple green 8
+plum purple 2
+kiwi brown 4
+potato brown 9
+pineapple yellow 5
diff --git a/eps/hpr2129/hpr2129_full_shownotes.html b/eps/hpr2129/hpr2129_full_shownotes.html
new file mode 100755
index 0000000..f09dc64
--- /dev/null
+++ b/eps/hpr2129/hpr2129_full_shownotes.html
@@ -0,0 +1,221 @@
+
Gnu Awk - Part 2 (HPR Show 2129)
+
Dave Morriss
+
Table of Contents
+
Introduction
+
This is the second episode in a series where Mr. Young and I will be looking at the AWK language (more particularly its GNU variant gawk). It is a comprehensive interpreted scripting language designed to be used for manipulating text.
+
The name AWK comes from the names of the authors: Alfred V. Aho, Peter J. Weinberger, and Brian W. Kernighan. The original version of AWK was written in 1977 at AT&T Bell Laboratories. See the GNU Awk User’s Guide for the full history of awk and gawk.
+
Strictly the name of the language is AWK in capitals, but the command that is typed to invoke it is awk or gawk, so I will use the lower-case version throughout these notes unless it is important to differentiate the two. Nowadays, on most Linux distributions, awk and gawk are synonyms referring to GNU Awk.
+
I first encountered awk in the late 1980’s when I was working on a Digital Equipment Corporation (DEC) VAXCluster running OpenVMS. This operating system did not have any very good ways of manipulating text without writing a compiled program, which was something I frequently needed to do. A version of gawk was ported to OpenVMS around this time, which I installed. For me gawk (and sed) totally changed the way I was able to work on OpenVMS at that time.
+
Simple Awk Usage Recap
+
Invoking Awk
+
As we saw in the last episode, awk is invoked on the command line as:
+
awk [options] 'program' inputfile1 inputfile2 ...
+
[options] are the options accepted by the command, one of which, -F, was introduced in the last episode
+
program is the awk program enclosed in single quotes; this may be preceded by -e (like sed) to make it clear that the program follows (where it might otherwise be ambiguous)
+
inputfile1 is the first file to be processed; there may be many; if the character - is given instead of a filename data is expected on standard input
+
+
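For example, picking out the second field of the CSV file from the last episode (a quick sketch):
+
$ awk -F "," '{ print $2 }' file1.csv | head -3
+color
+red
+yellow
+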
What Awk does
+
Awk views its input data as a series of “records” (usually newline-delimited lines), where each record contains a series of “fields”. A field is a component of a record delimited by a “field separator”.
+
In the last episode field separators were whitespace (spaces, TABs and newlines), which is the default, or a comma (-F "," or -F,).
+
One of the features of awk is that it treats multiple space separators as one, as we saw in the last episode. There were multiple spaces between many of the fields of the test file.
+
Other separators are not treated this way, so with the following example record, assuming that the field separator is a comma, three fields are found, with the second one being of zero length:
+
a,,b
+
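This is easy to check (a small sketch feeding the record on standard input):
+
$ echo 'a,,b' | awk -F "," '{ print NF " fields; field 2 is [" $2 "]" }'
+3 fields; field 2 is []
+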
Awk program
+
As we saw in the last episode, an awk program consists of a series of rules where each rule consists of:
+
pattern { action }
+
Normally each rule begins on a new line in the program (though this is not mandatory). There are program components other than rules, but we’ll deal with these later on.
+
In a rule ‘pattern’ is used to identify a line in some way, and ‘{ action }’ defines what will be done to the line which has been matched by the pattern. Patterns can be simple comparisons, regular expressions, combinations of the two and quite a few other things that will be covered throughout the series.
+
A pattern may be omitted, in which case the action is applied to every record. Also, a rule can consist only of a pattern, in which case the entire record is written as if the action was { print } (which means print the record).
+
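For example, a rule consisting of just a pattern prints the matching records; using file1.txt this prints the lines containing ‘red’:
+
$ awk '/red/' file1.txt
+apple red 4
+strawberry red 3
+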
Awk programs are essentially data driven in that actions depend on the data, so they are quite a bit different from programs in many other programming languages.
+
More about fields and records
+
As was covered in episode 1, once Awk has separated an input record into fields they are stored as numbered entities. These are available by using a dollar sign followed by a number. So, $1 refers to field 1, $2 field 2, and so on. The variable $0 refers to the entire record in an un-split state.
+
The number after a dollar sign is actually an expression, so $2 and $(1+1) mean the same thing. This is an example of an arithmetic expression, and is a useful feature of awk.
+
There is a special variable called NF in which awk stores the number of fields it has found in the current record. This can be printed or used in tests as shown in the following example (which uses file1.txt introduced in episode 1):
+
$ awk '{ print $0 " (" NF ")" }' file1.txt | head -3
+name color amount (3)
+apple red 4 (3)
+banana yellow 6 (3)
+
(Note that we used ‘head -3’ to truncate the output here.)
+
The way in which print works in awk is: it takes a series of arguments which may be variables or strings and concatenates them together. Here we have $0, the record itself, followed by a string containing a space and an open parenthesis, the NF variable, and another string containing a close parenthesis.
+
As well as counting fields per record, awk also counts input records. The record number is held in the variable NR, and this can be used in the same way as we have seen with NF. For example, to print the record number before each line we could write:
+
$ awk '{ print NR ": " $0 }' file1.txt
+1: name color amount
+2: apple red 4
+3: banana yellow 6
+4: strawberry red 3
+5: grape purple 10
+6: apple green 8
+7: plum purple 2
+8: kiwi brown 4
+9: potato brown 9
+10: pineapple yellow 5
+
Note that writing the above with no spaces other than the one after print is completely acceptable (though potentially less clear):
+
$ awk '{print NR": "$0}' file1.txt
+
In the audio I wasn’t sure about this, but I have since checked.
+
More about printing
+
So far we have seen the print statement and have found that it is a little awkward to use to print a mixture of fixed text and variables. In particular, there is no interpolation of variables into strings as can be seen in other scripting languages (e.g. Bash).
+
There is also a printf statement in Awk. This is similar to printf in C and Bash. It takes a format argument followed by a comma-separated list of items. The argument list may be enclosed in parentheses.
+
printf format, item1, item2, ...
+
The format argument (or format string) defines how each of the other arguments is to be output. It uses format specifiers to do this, amongst which are ‘%s’ which means “output a string” and ‘%d’ for outputting a whole decimal number. For example, the following printf statement outputs the record followed by a parenthesised number of fields:
+
printf "%s (%d)\n",$0,NF
+
Note that, unlike print, no newline is generated unless requested explicitly. The escape sequence ‘\n’ does this.
+
There are more format specifiers and more features of printf to be described, and these will be covered later in the series.
+
More about Awk programs
+
So far we have seen examples of simple awk programs written on the command line. For more complex programs it is usually preferable to place them in files. The option -f FILE may be used to invoke such a file containing a program. File example1.awk, included with this episode, is an example of this and holds the following:
+
/^a/ { print "A: " $0 }
+/^b/ { print "B: " $0 }
+
This would be run as follows:
+
$ awk -f example1.awk file1.txt
+A: apple red 4
+B: banana yellow 6
+A: apple green 8
+
It is the convention to give such files the extension .awk to make it clear that they hold an Awk program. This is not mandatory but it gives a useful clue to file managers and editors as to what the file is.
+
As you will have seen if you followed the sed series and other HPR episodes on scripting, an Awk program file can be made into a script by adding a #! line at the top and making it executable. The file example2.awk has been included with this episode to demonstrate this feature. It looks like this:
+
1  #!/usr/bin/awk -f
+2  #
+3  # Print all but line 1 with the line number on the front
+4  #
+5  NR > 1 { printf "%d: %s\n",NR,$0 }
+
Note that we added the path to where the awk program may be found, and ‘-f’, to the first line. Without the -f option awk would not treat the rest of the file as a program.
+
Note also that lines 2-4 are comments. Line 5 is the program which prints each line with a line number, but only if the number is greater than 1. Thus the header line is not printed.
+
The Awk file must be made executable for this to work:
+
$ chmod u+x example2.awk
+
Then it can be invoked as follows (assuming it is in the current directory):
+
$ ./example2.awk file1.txt
+2: apple red 4
+3: banana yellow 6
+4: strawberry red 3
+5: grape purple 10
+6: apple green 8
+7: plum purple 2
+8: kiwi brown 4
+9: potato brown 9
+10: pineapple yellow 5
+
Summary
+
This episode covered:
+
+
Awk’s concept of records and fields
+
How spaces as field separators are different from any other separators
+
How an Awk program is made up of ‘pattern { action }’ rules
+
How fields are referred to by a dollar sign followed by a numeric expression
+
The variables NF and NR which hold the number of fields and the record number
Back in 2015 Ken Fallon did a show (episode 1766) on how to use sox to truncate silence and speed up audio.
+
Inspired by this I wrote a Bash script to aid my use of the technique, which I thought I’d share with you.
+
Overview of the script
+
I called the script speedup although it performs the dual functions of speeding up and truncating silence.
+
The script is invoked thus:
+
$ speedup [options] filename
+
(If you didn’t place it somewhere in your PATH then you’d need to include the path to the script such as ./speedup if it’s in the current directory.)
+
The filename argument should be the full path of the audio file. Unless deleted with the -c option (see below) the script will rename the original file and create the modified file with the same name as the original. When it has finished, the unmodified original will have the name ‘NAME_.EXT’, with an underscore added after the original name as shown. So, processing a file called show.mp3 (a hypothetical name) would leave the sped-up audio in show.mp3 and the untouched original in show_.mp3.
+
The options are used to select the various features. They are:
+
+
-s
+
This option causes the audio to be sped up. It can be
+repeated and the speed up is increased for every -s given.
+
+
-t
+
This option causes the audio to have silences truncated.
+It can be repeated to increase the sensitivity of the
+truncation.
+
+
-m
+
Mix-down multiple (stereo) tracks to mono.
+
+
-c
+
Delete the original file leaving the modified file behind with the
+same name as the original.
+
+
-d
+
Engage dry-run mode where the planned actions are reported but nothing
+is actually done.
+
+
-D
+
Run in DEBUG mode where more information is reported about what is
+going on.
+
+
-h
+
Print the help information.
+
+
+
As mentioned above, the speedup and truncate functions can be “turned up” by repeating the options. The script counts the number of times a -s or -t option occurs and uses that number to index a list of speeds or truncation parameters. We will look at how this is done and what the possibilities are later.
+
The options conform to the usual Unix standard and can be concatenated, so the following invocations are the same and perform three levels of speeding up and one of truncation:
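$ speedup -s -s -s -t filename
+$ speedup -ssst filename
+
The script is quite long, so I will go through it in chunks. The first chunk contains the header comments and the _usage function:
+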
#!/usr/bin/env bash
+#===============================================================================
+#
+# FILE: speedup
+#
+# USAGE: ./speedup [-s ...] [-t ...] [-m] [-c] [-d] [-D] [-h] filename
+#
+# DESCRIPTION: A script to perform a speedup and silence removal on a given
+# audio file
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: ---
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.0.4
+# CREATED: 2015-05-01 21:51:32
+# REVISION: 2016-04-22 11:35:08
+#
+#===============================================================================
+
+set -o nounset # Treat unset variables as an error
+
+SCRIPT=${0##*/}
+VERSION="0.0.4"
+
+#=== FUNCTION ================================================================
+# NAME: _usage
+# DESCRIPTION: Report usage
+# PARAMETERS: 1 - the exit value (so it can be used to return an error
+# value)
+# RETURNS: Nothing
+#===============================================================================
+_usage () {
+    local res="${1:-0}"
+
+    cat <<-endusage
+
+Usage: ${SCRIPT} [-s ...] [-t ...] [-m] [-c] [-d] [-D] [-h] filename
+
+Speeds up and truncates silence in an audio file
+
+Options:
+    -s This option if present causes the audio to be sped up.
+       The option can be repeated and the speed up is increased for
+       every -s given
+    -t This option if present causes the audio to have silences
+       truncated. The option can be repeated to increase the
+       sensitivity of the truncation
+    -m Mix-down multiple (stereo) tracks to mono
+    -c Delete the original file leaving the modified file behind with
+       the same name as the original
+    -d Engage dry-run mode where the planned actions are reported
+       but nothing is actually done
+    -D Run in DEBUG mode where more information is reported
+       about what is going on
+    -h Print this help
+
+Arguments:
+    filename The full path of the audio file containing the podcast episode.
+
+Note:
+    Unless deleted with the -c option the script will rename the original file
+    and create the modified file with the same name as the original. The
+    original file will have the name 'NAME_.EXT' with an underscore added after
+    the original name.
+
+Version: $VERSION
+endusage
+    exit "$res"
+}
+
+#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
The first part consists of a comment, the declaration of a SCRIPT variable (taken from the $0 argument), and the version number.
+
This is followed by the definition of function _usage. This simply lists a “here document” using the cat <<-endusage statement near the top, and then exits using the argument as an exit value. The function is called to show how to use the script in circumstances where it is not appropriate for the script to continue, which is why it exits rather than returns.
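+
The next chunk sets the default values for the option variables and then processes the options:
+
+#
+# Default settings
+#
+CLEANUP=0
+DEBUG=0
+DRYRUN=0
+SPEEDUP=0
+TRUNCATE=0
+MIXDOWN=0
+
+#
+# Process options
+#
+while getopts :cDdhmst opt
+do
+    case "${opt}" in
+        c) CLEANUP=1;;
+        D) DEBUG=1;;
+        d) DRYRUN=1;;
+        s) ((SPEEDUP++));;
+        t) ((TRUNCATE++));;
+        m) MIXDOWN=1;;
+        h) _usage 1;;
+        *) _usage 1;;
+    esac
+done
+shift $((OPTIND - 1))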
In this section a collection of variables associated with the options is initialised. The while loop then processes the options. Note how the s and t options increment the variables SPEEDUP and TRUNCATE. Otherwise, presence of an option turns on (sets to 1) the variables defined earlier.
+
The shift statement at the end of this chunk is needed to remove all of the (now processed) options from the argument list, leaving the filename as argument 1 ($1).
+
+#
+# Check there is one argument
+#
+if [[ $# -ne 1 ]]; then
+    echo "Error: filename missing"
+    _usage 1
+fi
+
+#
+# Does the file given as an argument exist?
+#
+if [[ ! -e "$1" ]]; then
+    echo "File not found: $1"
+    exit 1
+fi
+
Now the script checks for the filename argument, aborting via function _usage if not found. It then checks to see if the file actually exists, and aborts with an error message if it doesn’t.
+
+if [[ $DRYRUN -eq 1 ]]; then
+    echo "Dry run: no changes will be made"
+fi
+
This chunk simply detects the use of the -d (dry run) option and reports that it is on.
+
+#
+# Work out the speed-up we want (if any) and generate the argument to sox
+#
+SPEEDS=( 1.05 1.1 1.2 1.3 1.4 1.5 1.6 1.7 )
+if [[ $SPEEDUP -eq 0 ]]; then
+    TEMPO=
+else
+    if [[ $SPEEDUP -gt ${#SPEEDS[@]} ]]; then
+        SPEEDUP=${#SPEEDS[@]}
+    fi
+    ((SPEEDUP--))
+    speed=${SPEEDS[$SPEEDUP]}
+    TEMPO="tempo ${speed}"
+fi
+
This chunk detects the speedup level and creates a TEMPO variable with the result. If there was no -s option then the variable is empty. If a value was given then it is checked to see that it doesn’t exceed the number of speeds defined. These speeds are defined in the variable SPEEDS, which is an array. You can see the script caters for speeds of 1.05, 1.1, 1.2 and so forth up to 1.7. This list was created for my needs; you could redefine it according to yours.
+
The speed count in variable SPEEDUP is decremented to index the array, which starts at index zero; then the value is stored in variable speed and used to define variable TEMPO ready for use with sox. For example, three -s options make SPEEDUP equal to 3, which is decremented to 2; SPEEDS[2] is 1.2, so TEMPO becomes ‘tempo 1.2’.
+
+#
+# Work out the silence truncation parameters (if any). The first set trims
+# silence but ignores silences of 0.5 seconds in the middle (like pauses for
+# breaths). The second set removes everything but can make a rather rushed
+# result. See http://digitalcardboard.com/blog/2009/08/25/the-sox-of-silence/
+# for some advice.
+#
+TRUNCS=( "1 0.1 1% -1 0.5 1%" "1 0.1 1% -1 0.1 1%" )
+if [[ $TRUNCATE -eq 0 ]]; then
+    SILENCE=
+else
+    if [[ $TRUNCATE -gt ${#TRUNCS[@]} ]]; then
+        TRUNCATE=${#TRUNCS[@]}
+    fi
+    ((TRUNCATE--))
+    silence=${TRUNCS[$TRUNCATE]}
+    SILENCE="silence ${silence}"
+fi
+
This chunk does more or less the same as the preceding one for silence truncation. The main difference is that the array TRUNCS contains only two components and each one is a string of numbers. The setting of sound truncation parameters for sox is quite complicated. See the reference in the comments and show 1766 if you want to understand it. The end result is that the variable SILENCE contains the necessary parameter for sox.
+
+if [[ $MIXDOWN == 0 ]]; then
+    REMIX=
+else
+    REMIX="remix -"
+fi
+
+#
+# Report some internals in debug mode
+#
+if [[ $DEBUG -eq 1 ]]; then
+    echo "SPEEDUP = $SPEEDUP"
+    echo "TRUNCATE = $TRUNCATE"
+    echo "MIXDOWN = $MIXDOWN"
+    echo "speed = ${speed:-0}"
+    echo "silence = ${silence:-}"
+    echo "TEMPO = $TEMPO"
+    echo "SILENCE = $SILENCE"
+    echo "REMIX = $REMIX"
+fi
+
+#
+# Is there anything to do?
+#
+if [[ -z $TEMPO && -z $SILENCE ]]; then
+    echo "Nothing to do; exiting"
+    exit 1
+fi
+
Next, the -m option is checked and the variable REMIX defined to contain the sox parameter which will result in the stereo audio being remixed to mono.
+
Then, if the -D (debug) option was provided the various settings are reported. This mainly of use to someone debugging or developing this script.
+
Lastly in this chunk the script checks to see if there is any work to do. If neither TEMPO nor SILENCE is set to anything then there is no need to continue, and it exits.
+
+#
+# Divide up the path to the file
+#
+orig="$(realpath "$1")"
+odir="${orig%/*}"
+oname="${orig##*/}"
+oext="${oname##*.}"
+
+#
+# The name of the original file will be changed to this
+#
+new="${odir}/${oname%.$oext}_.${oext}"
+
+#
+# Report the name of the input file
+#
+echo "Processing $orig"
+
+#
+# If the new name exists we already processed it
+#
+if [[ -e $new ]]; then
+    echo "Oops! Looks like this file has already been sped up"
+    exit 1
+fi
+
+#
+# Rename the original file
+#
+if [[ $DRYRUN -eq 1 ]]; then
+    printf "Dry run: rename %s to %s\n" "$orig" "$new"
+else
+    mv "$orig" "$new"
+fi
+
This chunk works on the file, making a new name for it so the converted file can have the original name.
+
Firstly the script saves the full pathname into variable orig (using realpath to sort out any links or relative directories). Then it parses the filename into the path, the filename and the extension. It then reassembles it, adding an underscore after the filename, in the variable new.
+
The script checks that the new file doesn’t exist, because if it does there’s a good chance that this audio file has been processed already, so it gives up.
+
Finally in this chunk the script renames the original file to the new name (or reports what it would do if we are in “dry run” mode).
+
+#
+# Speed up and remove long silences as requested
+# -S requests a progress display
+# -v2 adjusts the volume of the file that follows it on the command line by
+# a factor of 2
+# -V9 requests a very high (debug) level of verbosity (default -V2)
+# remix - mixes all stereo to mono
+#
+if [[ $DRYRUN -eq 1 ]]; then
+    printf "Dry run: %s\n" \
+        "sox -S -v2 \"${new}\" \"${orig}\" ${TEMPO} ${REMIX} ${SILENCE}"
+else
+    # [We want TEMPO, REMIX and SILENCE to be word-split etc]
+    # shellcheck disable=SC2086
+    sox -S -v2 "${new}" "${orig}" ${TEMPO} ${REMIX} ${SILENCE}
+fi
+
This is the meat of the script. The sox program is given the various parameters which have been created. If “dry run” mode is on then the script just prints what it would do, but otherwise it processes the renamed file into the original filename with the chosen parameters.
+
As an aside, I use a Vim plugin called “Syntastic” which applies a syntax checker to various source files as they are saved during an edit, reporting any errors the checker finds. The checker for Bash is called “shellcheck” and some of its checks can be turned off with comments like:
+
# shellcheck disable=SC2086
+
This is necessary here because shellcheck objects to the fact that variables like “${TEMPO}” are not quoted. We do not want to quote them here: if “${TEMPO}” were quoted it would be passed to sox as the single argument ‘tempo 1.5’ instead of the two words sox expects. However, we do want to quote the filenames in case they contain spaces or other dangerous characters.
+
+#
+# Delete the original file if asked. Note that the script can't detect that
+# the audio has been sped up if this file is missing.
+#
+if [[ $CLEANUP -eq 1 ]]; then
+    if [[ $DRYRUN -eq 1 ]]; then
+        printf "Dry run: delete %s\n" "$new"
+    else
+        rm -f "$new"
+    fi
+fi
+
+exit
+
+# vim: syntax=sh:ts=8:sw=4:ai:et:tw=78:fo=tcrqn21
+
Finally the script checks whether the -c option has requested the original (renamed) file be deleted. If so, the deletion request is reported in “dry run” mode or is actioned otherwise.
+
The comment on the last line is a so-called Vim “modeline” which contains settings for the Vim editor.
+
Conclusion
+
I use this as part of my podcast download workflow. In particular I process “The Linux Link Tech Show” thus:
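The command is along these lines (a sketch: the exact arguments to db_list_episodes are not shown in these notes):
+
$ db_list_episodes 'The Linux Link Tech Show' | xargs -n1 speedup -ssst
+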
Here db_list_episodes is a script which lists the paths to all of the episodes of a given podcast known to the database where I hold podcast data. The list is passed to the command xargs which runs speedup on each file as shown.
+
I have used this script regularly since I wrote it in May 2015. It does all that I want it to do at the moment, but in the next version I think I would change the logic which causes nothing to be done unless there are speed or silence truncation changes to be made. For example, a number of podcasts I download from the BBC have surprisingly low volume compared to most others, and I’d quite like to be able to amplify them.
+
I hope you find this script useful. Please contact me with any comments, corrections or improvements.
Normally I number the lines of scripts such as this in the notes. When trying to do so this time the tool I use to generate HTML notes (Pandoc) did not seem to like the fact that I chopped the script into chunks, and misbehaved. Since the script is quite long I didn’t want to leave my annotations to the end, so went with the un-numbered chunks you see here.
+
diff --git a/eps/hpr2135/hpr2135_speedup.bash b/eps/hpr2135/hpr2135_speedup.bash
new file mode 100755
index 0000000..1b7214c
--- /dev/null
+++ b/eps/hpr2135/hpr2135_speedup.bash
@@ -0,0 +1,252 @@
+#!/usr/bin/env bash
+#===============================================================================
+#
+# FILE: speedup
+#
+# USAGE: ./speedup [-s ...] [-t ...] [-m] [-c] [-d] [-D] [-h] filename
+#
+# DESCRIPTION: A script to perform a speedup and silence removal on a given
+# audio file
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: ---
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.0.4
+# CREATED: 2015-05-01 21:51:32
+# REVISION: 2016-04-22 11:35:08
+#
+#===============================================================================
+
+set -o nounset # Treat unset variables as an error
+
+SCRIPT=${0##*/}
+VERSION="0.0.4"
+
+#=== FUNCTION ================================================================
+# NAME: _usage
+# DESCRIPTION: Report usage
+# PARAMETERS: 1 - the exit value (so it can be used to return an error
+# value)
+# RETURNS: Nothing
+#===============================================================================
+_usage () {
+ local res="${1:-0}"
+
+ cat <<-endusage
+
+Usage: ${SCRIPT} [-s ...] [-t ...] [-m] [-c] [-d] [-D] [-h] filename
+
+Speeds up and truncates silence in an audio file
+
+Options:
+ -s This option if present causes the audio to be sped up.
+ The option can be repeated and the speed up is increased for
+ every -s given
+ -t This option if present causes the audio to have silences
+ truncated. The option can be repeated to increase the
+ sensitivity of the truncation
+ -m Mix-down multiple (stereo) tracks to mono
+ -c Delete the original file leaving the modified file behind with
+ the same name as the original
+ -d Engage dry-run mode where the planned actions are reported
+ but nothing is actually done
+ -D Run in DEBUG mode where more information is reported
+ about what is going on
+ -h Print this help
+
+Arguments:
+ filename The full path of the audio file containing the podcast episode.
+
+Note:
+ Unless deleted with the -c option the script will rename the original file
+ and create the modified file with the same name as the original. The
+ original file will have the name 'NAME_.EXT' with an underscore added after
+ the original name.
+
+Version: $VERSION
+endusage
+ exit "$res"
+}
+
+#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#
+# Default settings
+#
+CLEANUP=0
+DEBUG=0
+DRYRUN=0
+SPEEDUP=0
+TRUNCATE=0
+MIXDOWN=0
+
+#
+# Process options
+#
+while getopts :cDdhmst opt
+do
+ case "${opt}" in
+ c) CLEANUP=1;;
+ D) DEBUG=1;;
+ d) DRYRUN=1;;
+ s) ((SPEEDUP++));;
+ t) ((TRUNCATE++));;
+ m) MIXDOWN=1;;
+ h) _usage 1;;
+ *) _usage 1;;
+ esac
+done
+shift $((OPTIND - 1))
+
+#
+# Check there is one argument
+#
+if [[ $# -ne 1 ]]; then
+ echo "Error: filename missing"
+ _usage 1
+fi
+
+#
+# Does the file given as an argument exist?
+#
+if [[ ! -e "$1" ]]; then
+ echo "File not found: $1"
+ exit 1
+fi
+
+if [[ $DRYRUN -eq 1 ]]; then
+ echo "Dry run: no changes will be made"
+fi
+
+#
+# Work out the speed-up we want (if any) and generate the argument to sox
+#
+SPEEDS=( 1.05 1.1 1.2 1.3 1.4 1.5 1.6 1.7 )
+if [[ $SPEEDUP -eq 0 ]]; then
+ TEMPO=
+else
+ if [[ $SPEEDUP -gt ${#SPEEDS[@]} ]]; then
+ SPEEDUP=${#SPEEDS[@]}
+ fi
+ ((SPEEDUP--))
+ speed=${SPEEDS[$SPEEDUP]}
+ TEMPO="tempo ${speed}"
+fi
+
+#
+# Work out the silence truncation parameters (if any). The first set trims
+# silence but ignores silences of 0.5 seconds in the middle (like pauses for
+# breaths). The second set removes everything but can make a rather rushed
+# result. See http://digitalcardboard.com/blog/2009/08/25/the-sox-of-silence/
+# for some advice.
+#
+TRUNCS=( "1 0.1 1% -1 0.5 1%" "1 0.1 1% -1 0.1 1%" )
+if [[ $TRUNCATE -eq 0 ]]; then
+ SILENCE=
+else
+ if [[ $TRUNCATE -gt ${#TRUNCS[@]} ]]; then
+ TRUNCATE=${#TRUNCS[@]}
+ fi
+ ((TRUNCATE--))
+ silence=${TRUNCS[$TRUNCATE]}
+ SILENCE="silence ${silence}"
+fi
+
+if [[ $MIXDOWN == 0 ]]; then
+ REMIX=
+else
+ REMIX="remix -"
+fi
+
+#
+# Report some internals in debug mode
+#
+if [[ $DEBUG -eq 1 ]]; then
+ echo "SPEEDUP = $SPEEDUP"
+ echo "TRUNCATE = $TRUNCATE"
+ echo "MIXDOWN = $MIXDOWN"
+ echo "speed = ${speed:-0}"
+ echo "silence = ${silence:-}"
+ echo "TEMPO = $TEMPO"
+ echo "SILENCE = $SILENCE"
+ echo "REMIX = $REMIX"
+fi
+
+#
+# Is there anything to do?
+#
+if [[ -z $TEMPO && -z $SILENCE ]]; then
+ echo "Nothing to do; exiting"
+ exit 1
+fi
+
+#
+# Divide up the path to the file
+#
+orig="$(realpath "$1")"
+odir="${orig%/*}"
+oname="${orig##*/}"
+oext="${oname##*.}"
+
+#
+# The name of the original file will be changed to this
+#
+new="${odir}/${oname%.$oext}_.${oext}"
+
+#
+# Report the name of the input file
+#
+echo "Processing $orig"
+
+#
+# If the new name exists we already processed it
+#
+if [[ -e $new ]]; then
+ echo "Oops! Looks like this file has already been sped up"
+ exit 1
+fi
+
+#
+# Rename the original file
+#
+if [[ $DRYRUN -eq 1 ]]; then
+ printf "Dry run: rename %s to %s\n" "$orig" "$new"
+else
+ mv "$orig" "$new"
+fi
+
+#
+# Speed up and remove long silences as requested
+# -S requests a progress display
+# -v2 adjusts the volume of the file that follows it on the command line by
+# a factor of 2
+# -V9 requests a very high (debug) level of verbosity (default -V2)
+# remix - mixes all stereo to mono
+#
+#sox -S -v2 "${new}" "${orig}" -V9 ${TEMPO} remix - ${SILENCE}
+if [[ $DRYRUN -eq 1 ]]; then
+ printf "Dry run: %s\n" \
+ "sox -S -v2 \"${new}\" \"${orig}\" ${TEMPO} ${REMIX} ${SILENCE}"
+else
+ # [We want TEMPO, REMIX and SILENCE to be word-split etc]
+ # shellcheck disable=SC2086
+ sox -S -v2 "${new}" "${orig}" ${TEMPO} ${REMIX} ${SILENCE}
+fi
+
+#
+# Delete the original file if asked. Note that the script can't detect that
+# the audio has been sped up if this file is missing.
+#
+if [[ $CLEANUP -eq 1 ]]; then
+ if [[ $DRYRUN -eq 1 ]]; then
+ printf "Dry run: delete %s\n" "$new"
+ else
+ rm -f "$new"
+ fi
+fi
+
+exit
+
+# vim: syntax=sh:ts=8:sw=4:ai:et:tw=78:fo=tcrqn21
diff --git a/eps/hpr2163/hpr2163_arithmetic_assignment_operators.awk b/eps/hpr2163/hpr2163_arithmetic_assignment_operators.awk
new file mode 100755
index 0000000..3e0afe0
--- /dev/null
+++ b/eps/hpr2163/hpr2163_arithmetic_assignment_operators.awk
@@ -0,0 +1,9 @@
+BEGIN{
+ x = 42; print "x is",x
+ x += 1; print "x += 1 is",x
+ x -= 1; print "x -= 1 is",x
+ x *= 2; print "x *= 2 is",x
+ x /= 2; print "x /= 2 is",x
+ x %= 5; print "x %= 5 is",x
+ x ^= 4; print "x ^= 4 is",x
+}
diff --git a/eps/hpr2163/hpr2163_color_count.awk b/eps/hpr2163/hpr2163_color_count.awk
new file mode 100755
index 0000000..dc0ce25
--- /dev/null
+++ b/eps/hpr2163/hpr2163_color_count.awk
@@ -0,0 +1,12 @@
+BEGIN {
+ FS=","
+ OFS=","
+ print "color,count"
+}
+NR != 1 {
+ count[$2]+=1
+}
+END {
+ for (color in count)
+ print color, count[color]
+}
diff --git a/eps/hpr2163/hpr2163_full_shownotes.epub b/eps/hpr2163/hpr2163_full_shownotes.epub
new file mode 100755
index 0000000..ee3f678
Binary files /dev/null and b/eps/hpr2163/hpr2163_full_shownotes.epub differ
diff --git a/eps/hpr2163/hpr2163_full_shownotes.html b/eps/hpr2163/hpr2163_full_shownotes.html
new file mode 100755
index 0000000..367c7a1
--- /dev/null
+++ b/eps/hpr2163/hpr2163_full_shownotes.html
@@ -0,0 +1,402 @@
+
Gnu Awk - Part 4 (HPR Show 2163)
+
Dave Morriss
+
Table of Contents
+
Introduction
+
This is the fourth episode of the series that Mr. Young and I are doing. These shows are now collected under the series title “Learning Awk”.
+
Recap of the last episode
+
Logical Operators
+
We have seen the operators ‘&&’ (and) and ‘||’ (or). These are also called Boolean Operators. There is also one more operator ‘!’ (not) which we haven’t yet encountered. These operators allow the construction of Boolean expressions which may be quite complex.
+
If you are used to programming you will expect these operators to have a precedence, just like operators in arithmetic do. We will deal with this subject in more detail later since it is relevant not only in patterns but also in other parts of an Awk program.
+
The next statement
+
We saw this statement in the last episode and learned that it causes the processing of the current input record to stop. No more patterns are tested against this record and no more actions in the current rule are executed. Note that “next” is a statement like “print”, and can only occur in the action part of a rule. It is also not permitted in BEGIN or END rules (more of which anon).
+
The BEGIN and END rules
+
The BEGIN and END elements are special patterns, which in conjunction with actions enclosed in curly brackets make up rules in the same sense that the ‘pattern {action}’ sequences we have seen so far are rules. As we saw in the last episode, BEGIN rules are run before the main ‘pattern {action}’ rules are processed and the input file is (or files are) read, whereas END rules run after the input files have been processed.
+
It is permitted to write more than one BEGIN rule and more than one END rule. These are just concatenated together in the order they are encountered by Awk.
+
Awk will complain if either BEGIN or END is not followed by an action since this is meaningless.
+
Variables, arrays, loops, etc
+
Learning a programming language is never a linear process, and sometimes reference is made to new features that have not yet been explained. A number of new features were mentioned in passing in the last episode, and we will look at these in more detail in this episode.
+
Explaining variables
+
We saw the built-in variables like NR and NF earlier in the series, and you saw in the last episode that you can create your own variables too. A variable, as in other languages, is simply a named storage area that can hold a value. The name must consist of letters, digits or the underscore. It may not start with a digit, and there is a difference between upper case and lower case letters (‘sum’, ‘Sum’ and ‘SUM’ are different variables). Such simple variables which can hold a single value are also called scalars.
+
A variable in Awk may contain a numeric value or a string. Awk deals with the conversion of one to another as appropriate (though sometimes it needs help).
+
In Awk, unlike many other languages, it is not necessary to initialise variables before using them. All variables start as an empty string which is converted to zero as appropriate.
+
Variable assignment
+
Variables are set to values using assignment such as:
+
count = 3
+
As you saw in the last episode there are many types of assignment, for example:
+
used += $3
+
This means increment the contents of variable ‘used’ by the contents of field 3. (There is an assumption here that field 3 contains a numeric value, of course.)
+
It’s a shorthand version of:
+
used = used + $3
+
This means add the contents of ‘used’ to the contents of field 3 and save the result back in ‘used’.
+
The first time the variable is incremented its contents are taken to be zero. Relying on an uninitialised variable like this would be an error in older and stricter compiled languages, but Awk is more forgiving.
+
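A one-line demonstration of this (a sketch; everything is in a BEGIN rule so no input file is needed):
+
$ awk 'BEGIN{ used += 3; print used }'
+3
+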
Since we have now started to look at writing arithmetic expressions it is probably a good idea to review what the arithmetic operators are in Awk.
+
Arithmetic operators
+
It is important to note that all numbers in Awk are floating point numbers. This fact can catch you out in some edge cases, which we will try to highlight as the series progresses.
+
This list is based on the one from the GNU Awk User’s Guide. The operators are listed in order of their precedence, highest to lowest.
+
+
x ^ y
+
Exponentiation; x raised to the y power. ‘2 ^ 3’ has the value eight. There is a ‘**’ operator, but it is not standard, and therefore not portable, and will not be used here.
+
+
- x
+
Negation
+
+
+ x
+
Unary plus; this can be used to force Awk to convert a string to a number.
+
+
x * y
+
Multiplication
+
+
x / y
+
Division; because all numbers in awk are floating-point numbers, the result is not rounded to an integer – thus ‘3 / 4’ has the value 0.75, where in Bash ‘echo $((3/4))’ returns 0.
+
+
x % y
+
Remainder after x is divided by y. So ‘3 % 4’ is 3, ‘5 % 2’ is 1, and so on
+
+
x + y
+
Addition.
+
+
x - y
+
Subtraction.
+
+
+
Assignment operators
+
As you have seen arithmetic assignment operators (like +=) exist in Awk. These are a shorthand form of more verbose assignments. The following table lists these assignment operators (modified from the GNU Awk User’s Guide):
+
Operator                   Effect
variable += increment      Add increment to the value of variable.
variable -= decrement      Subtract decrement from the value of variable.
variable *= coefficient    Multiply the value of variable by coefficient.
variable /= divisor        Divide the value of variable by divisor.
variable %= modulus        Set variable to its remainder by modulus.
variable ^= power          Raise variable to the power power.
+
Examples
+
See the associated Awk script called arithmetic_assignment_operators.awk:
+
BEGIN{
+    x = 42; print "x is",x
+    x += 1; print "x += 1 is",x
+    x -= 1; print "x -= 1 is",x
+    x *= 2; print "x *= 2 is",x
+    x /= 2; print "x /= 2 is",x
+    x %= 5; print "x %= 5 is",x
+    x ^= 4; print "x ^= 4 is",x
+}
+
Note that everything here is in a BEGIN rule because we don’t want to process a file, just run a little Awk program. Note also that semicolons are needed as statement separators when there are multiple statements on a line, but not otherwise.
+
When run it produces the following output:
+
$ awk -f arithmetic_assignment_operators.awk
+x is 42
+x += 1 is 43
+x -= 1 is 42
+x *= 2 is 84
+x /= 2 is 42
+x %= 5 is 2
+x ^= 4 is 16
+
Type conversion
+
As mentioned earlier, a variable in Awk may contain a numeric value or a string at any point in time. When converting from a number to a string, the conversion simply consists of a string version of the number. Converting from a string to a number requires the string to begin with a valid digit sequence.
+
$ awk 'BEGIN{s="9gag.com"; x=s+1; print x}'
+10
+
If the string contains no valid number at the start then it is converted to zero.
+
Awk will convert integer numbers (42), and floating point numbers (4.2), as well as exponential numbers (1E3):
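For example (a sketch along the lines of the original demonstration, forcing the conversions with ‘+0’ and printing the results with printf):
+
$ awk 'BEGIN{ printf "%g %g %g\n", "42"+0, "4.2"+0, "1E3"+0 }'
+42 4.2 1000
+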
(Note: the ‘g’ format-control letter is for printing general numbers)
+
Increment and decrement operators
+
In the last episode we saw the use of these operators which increment or decrement the value of a variable by one. There are similar operators in Bash, and these were covered in HPR episode 1951.
+
The formal definition of these operators is:
+
+
++variable
+
Increment variable, returning the new value as the value of the expression.
+
+
variable++
+
Increment variable, returning the old value of variable as the value of the expression.
+
+
--variable
+
Decrement variable, returning the new value as the value of the expression. (This expression is like ‘++variable’, but instead of adding, it subtracts.)
+
+
variable--
+
Decrement variable, returning the old value of variable as the value of the expression. (This expression is like ‘variable++’, but instead of adding, it subtracts.)
+
+
+
We will look at some examples of the use of these operators a little later.
+
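In the meantime, a small sketch shows the difference between the post- and pre- forms:
+
$ awk 'BEGIN{ x = 5; print x++, x; print ++x, x }'
+5 6
+7 7
+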
Arrays
+
As well as the simple (scalar) variables we have seen, Awk also provides one-dimensional arrays. These arrays are associative (also known as hashes).
+
An array has a name conforming to the rules for scalar variables mentioned earlier. Not surprisingly you cannot name an array the same as a simple variable.
+
An array is a means of storing multiple values, and these values are referenced by index values. Also, unlike most compiled languages, Awk’s arrays can be of any length and can be added to at will. They can also be deleted from, but we’ll deal with that later.
+
Given an array a, we might store a value in it thus:
+
a[1] = "HPR"
+
Here the array name is a, the index is 1 and the contents of a[1] is the string “HPR”.
+
If you are familiar with arrays in other languages you might assume that the index 1 is numeric. In fact, in Awk it is converted to a string: all array indices are strings, because Awk arrays are not contiguous but associative. Such arrays are indexed by arbitrary string values, making a sort of look-up table.
+
Thus in an example in the last episode we saw:
+
NR != 1 {
+ a[$2]++
+}
+
Here the Awk script was being used to produce a frequency count of colours in our example file file1.txt. Field 2 in this file is the name of a colour, so the meaning of a[$2]++ is:
+
+
Index the array a by the (string) contents of field 2. If the element does not exist create it. Since Awk is very relaxed about initialisation, this array element will be taken to be zero on creation, and will then be incremented to 1. If the element already exists then its previous value will be incremented.
+
+
If you were able to look into the resulting array the end result would be:
+
+
+
+
Index
+
Contents
+
+
+
+
+
brown
+
2
+
+
+
purple
+
2
+
+
+
red
+
2
+
+
+
yellow
+
2
+
+
+
green
+
1
+
+
+
+
So, this shows that there is an array element: a["brown"]. Contained in this array element is the number 2 because the colour ‘brown’ was encountered twice.
+
Note also that the expression a[$2]++ achieves the same as the assignment a[$2] += 1.
+
Looping through arrays
+
In the last episode the concept of looping through an array to print it out was introduced. We saw:
+
for (b in a) {
+ print b, a[b]
+}
+
As is so often the case when learning to write scripts, it is useful to visit more advanced topics early on, even though the concepts behind them may not yet have been properly established.
+
We have not yet examined looping and other statements in Awk, but since we want to be able to process entire arrays we need to look at this one now.
+
In brief, the ‘for’ statement provides a way to repeat a given set of statements a number of times. We will look at this statement and the related ‘while’ statement later in the series.
+
This variant of the ‘for’ statement allows the processing of arrays. It consists of the following components:
+
for (variable in array)
+ body
+
The expression ‘(variable in array)’ results in all of the index values in the nominated array being provided, one at a time. While the loop runs, the variable is set to successive index values and the body is executed.
+
The body can consist of a single statement or a group of statements. If a group is used, then curly braces must be used to enclose them.
+
The order in which array index values are provided is not defined – different Awk versions will use different orders. There are extensions within GNU Awk (gawk) which can control this, but we will leave this until much later.
+
So, dealing with our example from last episode, we can modify it as follows (with spelling concessions due to the trans-Atlantic nature of this series):
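NR != 1 {
+ count[$2]++
+}
+
+END {
+ for (color in count)
+  print color, count[color]
+}
+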
This Awk script is available as color_count.awk. The array has been renamed from ‘a’ to ‘count’ because it holds counts (frequencies) of the number of times a colour is encountered. The array is indexed by the names of colours in field 2. When we loop through the array in the END rule we use a variable ‘color’ to store the latest index. Note that the unnecessary semicolons and curly braces have been removed (to demonstrate that they can be!).
+
In the last episode two more built-in (or predefined) variables were introduced. The first was FS, which we have encountered before, though not in such a form. The FS variable is set through the -F (or --field-separator) command-line option, so ‘-F ","’ on the command line is the same as the statement FS = "," in an Awk script. As we saw, the statement form needs to be in a BEGIN rule to be set early enough in the script.
+
$ awk -F "," 'BEGIN{print "FS is",FS}'
+FS is ,
+
Of course, FS controls the chosen field separator as has been explained earlier in the series.
+
In the last episode we also saw the OFS variable. This does not have a command-line equivalent. This variable, short for Output Field Separator, controls the format of the output record produced by the print statement. Normally it is set to a single space, so a print statement like the following separates its arguments with a single space:
+
$ awk 'BEGIN{print "Hello","World"}'
+Hello World
+
Note that omitting the comma results in the following:
+
$ awk 'BEGIN{print "Hello" "World"}'
+HelloWorld
+
This is because Awk has concatenated the two strings before handing them to the print statement.
+
The OFS variable can be set to a longer string if required:
+
$ awk 'BEGIN{OFS=" blurg "; print "Hello","World"}'
+Hello blurg World
+
The contents of OFS affect only the behaviour of the print statement, not printf:
+
$ awk 'BEGIN{OFS="\t"; printf "%s %s\n","Hello","World"}'
+Hello World
+
Here the first argument to the printf statement, the format string, specifies that two string arguments will be printed followed by a newline. The remaining arguments are the two strings. The contents of OFS have no effect on the output.
+
Summary
+
This episode covered:
+
+
A recap of the last episode
+
Variables: simple or scalar variables
+
Assignment of values to variables
+
Arithmetic operators used in arithmetic expressions
Actually, standard Awk provides a way of treating such arrays as multi-dimensional, and GNU Awk (gawk) provides true arrays of arrays, but this is rather advanced and non-portable!↩
+
diff --git a/eps/hpr2163/hpr2163_full_shownotes.pdf b/eps/hpr2163/hpr2163_full_shownotes.pdf
new file mode 100755
index 0000000..f63bdcf
Binary files /dev/null and b/eps/hpr2163/hpr2163_full_shownotes.pdf differ
diff --git a/eps/hpr2166/hpr2166_FC_1_60_360_0.png b/eps/hpr2166/hpr2166_FC_1_60_360_0.png
new file mode 100755
index 0000000..17ad60b
Binary files /dev/null and b/eps/hpr2166/hpr2166_FC_1_60_360_0.png differ
diff --git a/eps/hpr2166/hpr2166_FC_1_60_360_1.png b/eps/hpr2166/hpr2166_FC_1_60_360_1.png
new file mode 100755
index 0000000..dd06eee
Binary files /dev/null and b/eps/hpr2166/hpr2166_FC_1_60_360_1.png differ
diff --git a/eps/hpr2166/hpr2166_FC_1_60_360_2.png b/eps/hpr2166/hpr2166_FC_1_60_360_2.png
new file mode 100755
index 0000000..9d22127
Binary files /dev/null and b/eps/hpr2166/hpr2166_FC_1_60_360_2.png differ
diff --git a/eps/hpr2166/hpr2166_FC_1_60_360_3.png b/eps/hpr2166/hpr2166_FC_1_60_360_3.png
new file mode 100755
index 0000000..eea1663
Binary files /dev/null and b/eps/hpr2166/hpr2166_FC_1_60_360_3.png differ
diff --git a/eps/hpr2166/hpr2166_FC_1_60_360_4.png b/eps/hpr2166/hpr2166_FC_1_60_360_4.png
new file mode 100755
index 0000000..0054486
Binary files /dev/null and b/eps/hpr2166/hpr2166_FC_1_60_360_4.png differ
diff --git a/eps/hpr2166/hpr2166_FC_1_60_360_5.png b/eps/hpr2166/hpr2166_FC_1_60_360_5.png
new file mode 100755
index 0000000..61836a9
Binary files /dev/null and b/eps/hpr2166/hpr2166_FC_1_60_360_5.png differ
diff --git a/eps/hpr2166/hpr2166_full_shownotes.html b/eps/hpr2166/hpr2166_full_shownotes.html
new file mode 100755
index 0000000..ecb69fe
--- /dev/null
+++ b/eps/hpr2166/hpr2166_full_shownotes.html
@@ -0,0 +1,124 @@
How to use a Slide Rule (HPR Show 2166)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
In my show 1664, “Life and Times of a Geek part 1”, I spoke about using a slide rule as a schoolboy. As a consequence, I was asked if I would do a show on slide rules, and this is it (after a rather long delay).
+
What is a Slide Rule?
+
A slide rule is an analogue computer which can be used to do multiplication and division (amongst other mathematical operations). Most slide rules consist of a fixed portion with a central slot into which a sliding part fits. The top and bottom areas of the fixed part hold various different scales, and the slider is marked with its own scales. A transparent cursor slides over the top of the other parts and can be used to read from one scale to another.
+
+Slide Rule (Wikimedia)
+
I still have my slide rule from my schooldays, a plastic Faber-Castell version from the 1960’s.
+
+My old school slide rule looking very much worse for wear
+
Recently, while contemplating this HPR episode, I checked eBay to see whether slide rules were still available. Within the hour I had found an interesting-looking example, had placed a bid on it for £9.99, and won. It is also a Faber-Castell but mainly made of wood (possibly boxwood or mahogany) with ivory-like (celluloid) facings. It seems quite a bit older than my other one. It is a model 1/60/360, made in Bavaria, apparently from some time after 1935 when this style of model numbering began to be used.
+
+My newly acquired Faber-Castell 1/60/360
+
In researching it I found that the slide rule is actually split in two, with a spring steel spine which keeps the two halves together, and tensions the slot in which the slider runs. You can see some of this in the pictures.
+
How does a Slide Rule work?
+
Slide rules use logarithmic scales to perform multiplication and division.
+
What is a logarithm?
+
A logarithm of a number is the exponent to which a base must be raised to produce the number.
+
So, if the base is 10 (known as a common logarithm, written as ‘log10’) then, since 100 is 10^2, the log10 of it is 2, and the log10 of 1000 (10^3) is 3. The Wikipedia page on the logarithm does a better job of explaining this than I can do.
+
At the time I was using a slide rule, back in the 1960’s, we were expected to know how to use logarithms and were each allocated a book of log tables. This allowed you to look up the common logarithm of a number, or to convert a logarithm back to a number.
+
The great advantage of logarithms is that multiplication can be achieved by addition, and division by subtraction. In other words, the following rules apply for any base b:
+
log_b(xy) = log_b(x) + log_b(y)
+
log_b(x/y) = log_b(x) − log_b(y)
+
Provided b, x and y are positive and b is not 1.
+
So, at school when multiplying two numbers, the process was to take the first multiplicand, look up its log10, write it down, then do the same for the second multiplicand and add the two logarithms together. The result could then be looked up in an “anti-log” table to get the product of the two original numbers.
+
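For example, multiplying 3 by 2 with four-figure common logarithms:
+
log10(3) = 0.4771
+log10(2) = 0.3010
+0.4771 + 0.3010 = 0.7781
+anti-log: 10^0.7781 ≈ 6
+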
If you want to go further with this look at the wikiHow article below for details of how to use logarithmic tables.
+
John Napier
+
As an aside, the inventor of logarithms, John Napier, lived in Edinburgh and was born in 1550 in Merchiston Tower, otherwise known as Merchiston Castle. The original grounds of the tower are now the site of Edinburgh Napier University, and the tower is part of their Merchiston Campus. I live in Edinburgh, and have visited this site on many occasions.
+
+Merchiston Castle (Wikimedia)
+
The slide rule as a short-cut to using logarithms
+
With a slide rule the process uses logarithmic scales but short-circuits the table look-ups.
+
The operation of a slide rule is covered quite well on the Wikipedia page referenced in the Links section below.
+
Multiplication
+
We have already seen that the process of multiplication using logarithms is transformed into a process of addition. The example on the Wikipedia page shows the multiplication of 3 by 2: the sliding scale is positioned so that its 1 is over the 2 on the scale below it, and looking at the 3 on the sliding scale the answer, 6, can be seen below it.
+
On my Faber-Castell 1/60/360 I used the upper scale to achieve the same result (since it’s a little bit easier to see):
+
+
+Calculating 3 times 2
+
The same can of course be achieved by placing the 1 on the sliding scale against the 3 on the upper scale and reading from the 2 on the sliding scale:
+
+
+Calculating 2 times 3
+
Division
+
Taking the Wikipedia example of 5.5 divided by 2, on my Faber-Castell 1/60/360 again, the 5.5 mark on the slider is aligned with the 2 mark on the upper scale and the result, 2.75, is read off the slider under the 1 on the upper scale.
+
+
+Calculating 5.5 divided by 2
+
Further Study
+
The International Slide Rule Museum offers many resources for the slide rule enthusiast. If you are interested in learning more about how to use a slide rule then they have a self-guided course with a virtual slide rule.
+
In addition, you could consider obtaining a real slide rule. There are many to be had for not very much money on eBay. Apart from the Faber-Castell I bought for £10, and have been demonstrating here, I have bought two more Faber-Castell models, costing less than £20 for both.
+
+
diff --git a/eps/hpr2166/hpr2166_slide_rule.png b/eps/hpr2166/hpr2166_slide_rule.png
new file mode 100755
index 0000000..dffdecc
Binary files /dev/null and b/eps/hpr2166/hpr2166_slide_rule.png differ
diff --git a/eps/hpr2173/hpr2173_blinkt_client.py b/eps/hpr2173/hpr2173_blinkt_client.py
new file mode 100755
index 0000000..baebcc4
--- /dev/null
+++ b/eps/hpr2173/hpr2173_blinkt_client.py
@@ -0,0 +1,76 @@
+#!/usr/bin/env python2
+
+from blinkt import set_pixel, show, clear
+
+import paho.mqtt.client as mqtt
+
+"""
+Based on the 'mqtt.py' script provided by Pimoroni.
+This one talks to a Mosquitto broker on the same host and assumes that the
+Blinkt is connected to the local machine.
+"""
+
+MQTT_SERVER = "localhost"
+MQTT_PORT = 1883
+MQTT_TOPIC = "pimoroni/blinkt"
+
+# Set these to use authorisation
+MQTT_USER = None
+MQTT_PASS = None
+
+def on_connect(client, userdata, flags, rc):
+ print("Connected with result code "+str(rc))
+
+ client.subscribe(MQTT_TOPIC)
+
+def on_message(client, userdata, msg):
+
+ data = msg.payload.split(',')
+ command = data.pop(0)
+
+ if command == "clr" and len(data) == 0:
+ clear()
+ show()
+ return
+
+ if command == "rgb" and len(data) == 4:
+ try:
+ pixel = data.pop(0)
+
+ if pixel == "*":
+ pixel = None
+ else:
+ pixel = int(pixel)
+ if pixel > 7:
+ print("Pixel out of range: " + str(pixel))
+ return
+
+ r, g, b = [int(x) & 0xff for x in data]
+
+ print(command, pixel, r, g, b)
+
+ except ValueError:
+ print("Malformed command: " + str(msg.payload))
+ return
+
+ if pixel is None:
+ for x in range(8):
+ set_pixel(x, r, g, b)
+ else:
+ set_pixel(pixel, r, g, b)
+
+ show()
+ return
+
+
+client = mqtt.Client()
+client.on_connect = on_connect
+client.on_message = on_message
+
+if MQTT_USER is not None and MQTT_PASS is not None:
+ print("Using username: {un} and password: {pw}".format(un=MQTT_USER, pw="*" * len(MQTT_PASS)))
+ client.username_pw_set(username=MQTT_USER, password=MQTT_PASS)
+
+client.connect(MQTT_SERVER, MQTT_PORT, 60)
+
+client.loop_forever()
diff --git a/eps/hpr2173/hpr2173_blinkt_legends.svg b/eps/hpr2173/hpr2173_blinkt_legends.svg
new file mode 100755
index 0000000..a3a9ef3
--- /dev/null
+++ b/eps/hpr2173/hpr2173_blinkt_legends.svg
@@ -0,0 +1,604 @@
diff --git a/eps/hpr2173/hpr2173_cronjob_comments b/eps/hpr2173/hpr2173_cronjob_comments
new file mode 100755
index 0000000..f6dd8af
--- /dev/null
+++ b/eps/hpr2173/hpr2173_cronjob_comments
@@ -0,0 +1,69 @@
+#!/bin/bash -
+#===============================================================================
+#
+# FILE: cronjob_comments
+#
+# USAGE: ./cronjob_comments
+#
+# DESCRIPTION: Runs 'scrape_comments' every so often through Cron. This
+# returns the number of comments awaiting approval.
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: This version is for running on Pi Zero 1, it just lights the
+# Blinkt and doesn't run the pop-up code
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.0.2
+# CREATED: 2016-07-07 10:43:51
+# REVISION: 2016-09-11 22:10:33
+#
+#===============================================================================
+
+set -o nounset # Treat unset variables as an error
+
+SCRIPT=${0##*/}
+
+#
+# Directories and files
+#
+BASEDIR="$HOME/HPR/Community_News"
+LOGS="$BASEDIR/logs"
+LOGFILE="$LOGS/$SCRIPT.log"
+
+#
+# LED number on the Blinkt
+#
+LED=1
+
+#
+# Simple sanity check
+#
+SCRAPER="$BASEDIR/scrape_comments"
+if [[ ! -e $SCRAPER ]]; then
+ echo "$SCRAPER was not found"
+ exit 1
+fi
+
+#
+# Capture output and result. The script returns the number of comments and
+# prints a message which we log
+#
+message="$($SCRAPER)"
+result=$?
+echo "$(date +%Y%m%d%H%M%S) $message" >> "$LOGFILE"
+
+if [[ $result -gt 0 ]]; then
+ #
+ # Send stuff to the local Blinkt!, pixel $LED
+ #
+ mosquitto_pub -t pimoroni/blinkt -m "rgb,$LED,255,255,0"
+else
+ #
+ # Turn the pixel off, there are no comments
+ #
+ mosquitto_pub -t pimoroni/blinkt -m "rgb,$LED,0,0,0"
+fi
+
+# vim: syntax=sh:ts=8:sw=4:ai:et:tw=78:fo=tcrqn21
+
diff --git a/eps/hpr2173/hpr2173_full_shownotes.html b/eps/hpr2173/hpr2173_full_shownotes.html
new file mode 100755
index 0000000..fb9920d
--- /dev/null
+++ b/eps/hpr2173/hpr2173_full_shownotes.html
@@ -0,0 +1,184 @@
Driving a Blinkt! as an IoT device (HPR Show 2173)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
I managed to buy a Raspberry Pi Zero when they first came out in December 2015. This was not easy since they were very scarce. I also bought a first-generation case from Pimoroni and some 40-pin headers. With the Zero this header is not pre-installed and it’s necessary to solder it onto the Pi yourself.
+
I have had various project ideas for this Pi Zero, but had not decided on one until recently. Within the last month or two Pimoroni produced a device called the Blinkt! which has eight APA102 RGB LEDs and attaches to the GPIO header. This costs £5, just a little more than the Zero itself.
+
My plan was to combine the two and turn them into a status indicator for various things going on that needed my attention.
+
Making an LED Indicator
+
The plan was to mount the Zero inside something where the LEDs could be clearly seen and to label them in some way so that the significance of their signal could be easily determined.
+
I found a small but fairly deep picture frame in IKEA in their RIBBA range and decided to use that. The one I picked up has external dimensions 12.5cm by 17.4cm and is 3.4cm deep. The internal size is 10cm by 15cm. It has a silvered finish. There is a piece of glass in it and behind that is a piece of hardboard1 with fixtures for hanging the frame or standing it on a flat surface.
+
I wanted to mount the Zero behind the hardboard, drilling holes in it to allow the lights to shine through. The Zero in its case was to be held on with new longer nylon bolts. As mentioned, the case is the early Pimoroni model and as a consequence the nylon bolts are M2.5 20mm, not an easy size to find. The later cases use M3 bolts.
+
I made a design in Inkscape which would act as a template for drilling the holes and could be placed in the finished frame to label each of the lights. The original plan was to use the paper the design was printed on to act as a light diffuser, since the LEDs are quite bright, and to provide space for the legends.
+
Getting everything in just the right position in the template turned out to be quite difficult, but I learned much more about Inkscape than I knew before while I was doing it.
+
The Inkscape SVG file is included with this show in case you want to use it (be aware that it contains a hidden layer I used to line things up).
+
Pictures of the Pi Zero
+
+Pi Zero in a case, with Blinkt!, LED 6 on
+
+Pi Zero with Blinkt! fixed to the hardboard
+
+Pi Zero with wireless dongle bolted to picture frame backing
+
+Front of backing plate (with not very well aligned holes)
+
+Pi Zero on backing plate, with power cable, mounted in the RIBBA frame
+
+Front view of finished frame
+
Experimenting with the Blinkt!
+
As usual with products from Pimoroni there are lots of great hints and tips on how to use them. The Blinkt! is probably best driven from Python, and there is a library to make it easier, which can be installed from the Pimoroni site.
+
Within the repository on GitHub there are many example Python scripts, most of which I tried out. These are a great resource for developing your own code, especially if (like me) you’re not experienced with Python.
+
A note about Pi security
+
I have never been happy about leaving my Raspberry Pi’s with the default “pi” username and “raspberry” password. Quite a few tools assume that you have done this. Also, it’s assumed that the pi account is enabled to run sudo without a password.
+
I normally disable the pi account and/or change its password. I also remove it from /etc/sudoers (using visudo to do this).
+
I normally create a dave account with a complex password (generated with KeePassX), and give it sudo access, with a password.
+
When using the Blinkt! it is assumed that every script accessing the GPIO needs to run as root. The Blinkt! is attached to the GPIO of course. The GPIO device driver uses file system files under the directory /sys/class/gpio/ and these are owned by user root and group gpio.
+
I searched for ways in which I could access the GPIO without prefixing every command with sudo (and typing the password). I didn’t find a clear answer, but I had dealt with a similar situation when making one of my Pi’s a printer and scanner driver, so I tried giving my account dave membership of the gpio group:
+
sudo usermod -a -G gpio dave
+
This was the answer, and after that the need to use sudo had disappeared. (The -a option appends the new group membership; without it usermod replaces the user’s existing supplementary groups. The change takes effect at the next login.)
+
I would much prefer that this was standard practice and was well documented. Learning how to use the Raspberry Pi should also include learning some basic security practices I believe.
+
Note that user pi also has gpio group membership:
+
$ id pi | fold
+uid=1000(pi) gid=1000(pi) groups=1000(pi),4(adm),20(dialout),24(cdrom),27(sudo),
+29(audio),44(video),46(plugdev),60(games),100(users),101(input),108(netdev),999(
+spi),998(i2c),997(gpio)
+
I used the “fold” command (pre-installed in Raspbian) to wrap the long line to its default width of 80 characters for these notes.
+
Communicating with the Pi Zero
+
So, knowing that I could control the Blinkt! with scripts on the Pi Zero, I next wanted to come up with a way to control it remotely. I was reluctant to design a communications infrastructure, write my own listener and make it accept remote commands, so I looked for alternatives.
+
My first point of reference was a queuing system called ZeroMQ which I had heard about from various places, and recently on the Changelog podcast. As I looked into this it seemed like overkill for this project.
+
I next looked at MQTT which is much more lightweight. I remembered I had been to a talk on this at OggCamp 2012 given by Andy Piper, and recalled mention of a system called Mosquitto in relation to this. This protocol (amongst others?) is apparently being used in the Internet of Things.
+
I soon found that I could install the mosquitto server and clients on the Pi Zero with very little trouble:
+
sudo apt-get install mosquitto mosquitto-clients
+
This gave me the Mosquitto server and some clients. The server was set up to run at boot time without any intervention on my part, and can be controlled as root with the service command. The clients consist of the commands mosquitto_sub and mosquitto_pub.
+
What MQTT does
+
The design of MQTT is based around a publish/subscribe or “pub/sub” model. This requires a message broker whose job is to pass messages from publisher to subscriber. It knows which messages to send where by filtering them based on an attribute called the topic.
+
A publisher might be a temperature sensor or a doorbell sending a message in response to an event, and a subscriber might be a heating system, or an audio or visual alert system receiving the message and performing an action. Thus the temperature sensor controls the room temperature and the doorbell makes a sound or flashes a light.
+
The Mosquitto broker is the server mentioned earlier, and the commands mosquitto_pub and mosquitto_sub are examples of a publisher and a subscriber interface.
+
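As a minimal sketch of this model, using the topic that appears later in these notes, one terminal can subscribe while another publishes; the subscriber prints each payload it receives:
+
$ mosquitto_sub -t pimoroni/blinkt &
+$ mosquitto_pub -t pimoroni/blinkt -m "rgb,1,255,0,255"
+rgb,1,255,0,255
+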
A Python library paho-mqtt exists to allow scripts to be written to interface with this system. This can be installed thus:
+
sudo pip install paho-mqtt
+
Pimoroni provide an example script in their blinkt repository called mqtt.py which demonstrates the use of this library.
+
My first version listener script
+
I slightly modified the mqtt.py script from Pimoroni to do what I wanted.
+
I renamed it blinkt_client.py and modified it to connect to localhost where the Mosquitto broker is running. It is an MQTT subscriber so it needs to spend its time listening for messages. I left the topic as pimoroni/blinkt and used the standard port 1883.
+
The script is most simply run in the background, so I added a crontab entry which starts it up when the Pi is rebooted:
+
@reboot $HOME/blinkt_client.py&
+
This script uses the original Pimoroni design and expects messages of two forms (text extracted from mqtt.py):
+
rgb,<pixel>,<r>,<g>,<b> - Set a single pixel to an RGB colour. Example: rgb,1,255,0,255
+clr - Clear Blinkt!
+
If the pixel value is ‘*’ then all pixels are set to the chosen RGB value.
+
My first publisher
+
For the first publisher using this system I imported a pair of scripts I had been running on my desktop machine to help with the moderation of comments on the HPR website. This pair consists of a Bash script which is run every 15 minutes from cron, called cronjob_comments. It runs a Perl script called scrape_comments which scrapes a page on the HPR website to detect if there are any new comments which require attention2. If the Perl script finds anything it tells the Bash script, which uses the following command:
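mosquitto_pub -t pimoroni/blinkt -m "rgb,$LED,255,255,0"
+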
This sends a message to the MQTT broker on the same machine with the pimoroni/blinkt topic. The message payload causes the pixel identified by $LED to turn on with a sort of yellowish colour (RGB #FFFF00).
+
If there is no work to do then the equivalent command is:
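mosquitto_pub -t pimoroni/blinkt -m "rgb,$LED,0,0,0"
+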
At the time of recording I have not completed the next version of the listener. I want it to offer a similar interface, but I’d also like to be able to blink an LED, or change its colour briefly and then revert to the previous colour. Once I have finished this I might do another HPR show about it.
+
Other ways I’m using the notification system
+
Aside from the local web scraper plus notification already described I have a few other functions running and more planned.
+
New HPR shows
+
I have a web scraper running on my main desktop system, which I described in HPR show 1971 about my BlinkStick. This lights the BlinkStick red when new shows have been sent in to HPR and alerts me to process the show notes. When the processing is done the checking script (which runs every 30 minutes) notices and clears the light.
+
I added the Blinkt! display to this script. I installed Mosquitto on the desktop and use the mosquitto_pub command to tell the Zero to make LED 0 red, and turn it off at the same time as the BlinkStick. I will probably stop using the BlinkStick for this task in due course.
+
Mail notification
+
I use Thunderbird to handle my mail on my desktop machine. I have several POP and IMAP mail accounts and use a large number of filters to move incoming mail into folders (or the SPAM bucket). I use a Thunderbird add-on called Mailbox Alert to alert me to mail that needs particular attention. This tool can perform various actions when mail arrives in a folder. I use a range of sounds from Freesound.org (many of which I have edited) for several folders.
+
The Mailbox Alert tool can also run a script. I have written a very simple one which uses mosquitto_pub to make LED 6 green. At the moment this only happens when I get email from my son.
+
Turning off the mail alert LED is a problem. I want to be able to do this when the message that turned it on has been read. I have written a Perl script which can scan a mail folder (which is a file in mbox format) for unread messages, and I am experimenting with running this from cron. It’s not a lightweight process, so I don’t want to run it too frequently. This is very much a work in progress.
+
Planned IRC notifier
+
I use weechat with IRC. This program has a powerful plugin capability. I am looking at the possibility of writing a plugin which will alert me to an IRC message by turning on an LED on the Blinkt!. This is just an idea at the moment.
+
Using MQTT with the BlinkStick
+
As I have said I installed Mosquitto on my workstation. I wrote another listener (based on an example on the BlinkStick website) which registers with the local message broker and drives the BlinkStick. I am using this as a development platform for my experiments in Python scripting. It required some work to bring up to date since it was written in 2013.
+
I am using this set-up in conjunction with the Pi Zero. Every 30 minutes the Zero runs a script under the control of cron which calls mosquitto_pub and tells the BlinkStick listener to flash the LED. This was because the Zero had been disappearing from the home WiFi network and I wanted reassurance that it was still alive!
+
Conclusion
+
I have very little interest in the Internet of Things when it requires access to a remote server to turn my lights on. However, I’m excited about the possibilities when I have full control over all of the components. I have found that MQTT in the shape of Mosquitto is simple to set up and use and has great potential for building communicating systems.
+
I have to admit that it’s slightly eerie if I have to get up in the night and I see the “New HPR comments” LED glowing brightly in the dark! It’s also quite cool though.
+
The various bits of code demonstrated here are not yet available on a Git repository. It is planned to release them at some point in the future.
+
Around the time I started messing with this project Jezra was also building an MQTT project. He is a much more experienced Python programmer than I am so checking out his blog might be worth your while.
Apparently what is known as hardboard in the UK is called high-density fiberboard in the USA.↩
+
I collect information about the comment status from the stats page↩
+
diff --git a/eps/hpr2173/hpr2173_img_01.png b/eps/hpr2173/hpr2173_img_01.png
new file mode 100755
index 0000000..04e652f
Binary files /dev/null and b/eps/hpr2173/hpr2173_img_01.png differ
diff --git a/eps/hpr2173/hpr2173_img_02.png b/eps/hpr2173/hpr2173_img_02.png
new file mode 100755
index 0000000..370d2c5
Binary files /dev/null and b/eps/hpr2173/hpr2173_img_02.png differ
diff --git a/eps/hpr2173/hpr2173_img_03.png b/eps/hpr2173/hpr2173_img_03.png
new file mode 100755
index 0000000..dd88b83
Binary files /dev/null and b/eps/hpr2173/hpr2173_img_03.png differ
diff --git a/eps/hpr2173/hpr2173_img_04.png b/eps/hpr2173/hpr2173_img_04.png
new file mode 100755
index 0000000..63fcc16
Binary files /dev/null and b/eps/hpr2173/hpr2173_img_04.png differ
diff --git a/eps/hpr2173/hpr2173_img_05.png b/eps/hpr2173/hpr2173_img_05.png
new file mode 100755
index 0000000..9373af7
Binary files /dev/null and b/eps/hpr2173/hpr2173_img_05.png differ
diff --git a/eps/hpr2173/hpr2173_img_06.png b/eps/hpr2173/hpr2173_img_06.png
new file mode 100755
index 0000000..25e1d40
Binary files /dev/null and b/eps/hpr2173/hpr2173_img_06.png differ
diff --git a/eps/hpr2202/hpr2173_cronjob_comments b/eps/hpr2202/hpr2173_cronjob_comments
new file mode 100755
index 0000000..f6dd8af
--- /dev/null
+++ b/eps/hpr2202/hpr2173_cronjob_comments
@@ -0,0 +1,69 @@
+#!/bin/bash -
+#===============================================================================
+#
+# FILE: cronjob_comments
+#
+# USAGE: ./cronjob_comments
+#
+# DESCRIPTION: Runs 'scrape_comments' every so often through Cron. This
+# returns the number of comments awaiting approval.
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: This version is for running on Pi Zero 1, it just lights the
+# Blinkt and doesn't run the pop-up code
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.0.2
+# CREATED: 2016-07-07 10:43:51
+# REVISION: 2016-09-11 22:10:33
+#
+#===============================================================================
+
+set -o nounset # Treat unset variables as an error
+
+SCRIPT=${0##*/}
+
+#
+# Directories and files
+#
+BASEDIR="$HOME/HPR/Community_News"
+LOGS="$BASEDIR/logs"
+LOGFILE="$LOGS/$SCRIPT.log"
+
+#
+# LED number on the Blinkt
+#
+LED=1
+
+#
+# Simple sanity check
+#
+SCRAPER="$BASEDIR/scrape_comments"
+if [[ ! -e $SCRAPER ]]; then
+ echo "$SCRAPER was not found"
+ exit 1
+fi
+
+#
+# Capture output and result. The script returns the number of comments and
+# prints a message which we log
+#
+message="$($SCRAPER)"
+result=$?
+echo "$(date +%Y%m%d%H%M%S) $message" >> "$LOGFILE"
+
+if [[ $result -gt 0 ]]; then
+ #
+ # Send stuff to the local Blinkt!, pixel $LED
+ #
+ mosquitto_pub -t pimoroni/blinkt -m "rgb,$LED,255,255,0"
+else
+ #
+ # Turn the pixel off, there are no comments
+ #
+ mosquitto_pub -t pimoroni/blinkt -m "rgb,$LED,0,0,0"
+fi
+
+# vim: syntax=sh:ts=8:sw=4:ai:et:tw=78:fo=tcrqn21
+
diff --git a/eps/hpr2202/hpr2202_full_shownotes.html b/eps/hpr2202/hpr2202_full_shownotes.html
new file mode 100755
index 0000000..b52ec7f
--- /dev/null
+++ b/eps/hpr2202/hpr2202_full_shownotes.html
@@ -0,0 +1,210 @@
Makers on YouTube (HPR Show 2202)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
I have always enjoyed making stuff. I was born and brought up in the 1940’s and 1950’s when the UK was recovering from WW2, and in my experience everyone I knew repaired and made stuff. Most of them grew their own food as well.
+
I have never been particularly good at making stuff, but I have built some basic furniture, built storage solutions for the house, built a rabbit hutch and run for my children’s pets, and so on and so forth.
+
In high school, even though I went to a Grammar School, all boys attended mandatory lessons on metalwork and woodwork. We learnt how to use hand tools and some power tools, how to make joints in wood, and how to do basic metalwork like soldering and brazing, and so forth.
+
Learning this stuff at school was great but I have used the woodworking techniques more than the metalwork - other than soldering.
+
I stopped watching TV in 2013, preferring reading and listening to podcasts. In recent times I have subscribed to a number of YouTube channels which share woodworking and metalworking techniques and projects. In general these people are Makers and Artists who can turn their hands to many skills. I thought I would share some of my favourites via HPR.
+
Some of my favourite Maker channels
+
A lot of the makers I have subscribed to earn a living on YouTube, but not all. In the list below I have included all or part of the channel description from YouTube and have given my impressions. It was difficult to make the list any shorter than this. I am subscribed to quite a few more channels.
Channel description: Woodworking videos and instruction. This is Paul Sellers’ channel where he shares his woodworking experience. The videos are mostly to show what you can do with wood but partially instructional as well.
+
Country: United Kingdom
+
This was the first channel I followed. I was interested in making a workbench from the basic sort of wood available in UK DIY stores, and Paul Sellers had a series on how to do this using largely manual methods. His episodes are usually long and very detailed.
Channel description: Architecture at a small scale expressed through woodworking and film making.
+
Country: United States
+
Frank’s woodworking and design skills are superb, and his video skills are amazing too. Many of his videos use impressive stop-motion techniques. He trained in architecture, which probably accounts for some of this. He has what to me is the best workshop I have ever seen; I recommend watching the series on how he created it. He collects and refurbishes old workshop machinery and has built his own very large CNC. His website is here.
Channel description: I make stuff for a living, what you see me do here is my Job. I have been using tools for over 40 years. I have developed my comfort level with tools through years of experience. DON’T DO THE DANGEROUS THINGS I DO. Thank You for watching and subscribing!
+
Country: United States
+
Jimmy is one of the few makers to have a Wikipedia page. He is a very skilled maker, artist and designer from New York who seems at home making anything in any medium and solving any problem. He also teaches.
Channel description: Videos about woodworking, taking more of an engineering perspective on things. This channels started out as a place to have videos to go with the articles on my website (http://woodgears.ca). However, since then, the videos have taken on a life of their own. But most videos still have a corresponding article on my website - see links in video descriptions.
+
Country: Canada
+
Matthias trained as an engineer and is now primarily a woodworker, but will turn his hand to many forms of making. In particular he has built many of the machines in his workshop and invented a device called a Pantorouter. You can find details of other projects on his website. I particularly like the way he makes gear-powered devices using plywood for the gears, which he draws with (Windows) software of his own.
Channel description: I like to make all sorts of stuff, with all sorts of materials. I have lots of projects including woodworking, metalworking, electronics, 3D printing, prop making and more! These videos are my attempt at teaching, inspiring and empowering others to make the stuff that they want to have. Hopefully you’ll see something here that will inspire you to make something that you’re passionate about!
+
Country: United States
+
Bob Clagett, the channel owner, is a maker who is constantly experimenting with new ways of building and making things. He has programming experience and uses Arduinos in projects he builds. He has learnt to weld and makes metal-based items as well as building with wood and various other materials.
Channel description: Hey everyone! My name is Matt and I love woodworking. I build fine furniture using my own blend of hand and power tools and I start with cutting down a tree. My designs have a clean straight line look but I also really enjoy building period pieces. My videos aim to motivate others to challenge themselves and try something new.
+
Country: United States
+
The host is an exquisite woodworker. He has skill in felling trees and milling them into boards and has been building all manner of beautiful items from the result. In recent times, having learnt to weld, he has been building his own huge bandsaw mill capable of cutting up amazingly large tree trunks.
Channel description: These videos are for entertainment purposes only.
+
Country: United States
+
Jay is primarily a woodworker. He also does some impressive workshop and DIY projects. The maple and walnut boxes he has recently made have been particularly beautiful.
Channel description: Jon Peters Art & Home teaches and inspires you to make art, woodworking, and home improvement projects at home. Whether you’re a beginning artist, a practical do-it-yourselfer, or a professional craftsman, my instructional videos will provide a how-to guide to great projects that anyone can create. I will show you the tools, plans, and tricks of the trade to bring art and design to your home. New how-to videos every week to provide “Inspiration for Creative Living”. Enjoy and get inspired!
+
Country: United States
+
Jon is a woodworker and artist and is also skilled in metalwork. His projects are very high quality and good to learn from. There is quite a range of subjects on his channel, from beekeeping to painting, cooking to making furniture.
Channel description: Woodworking projects for the beginner and advanced woodworker. I show that making and building is fun and rewarding for all skill levels. I focus on design and originality and show you that anyone can be a creative woodworker.
+
Country: United States
+
David Picciuto, the channel owner, is a woodworker who has some very original ideas and designs for projects. He also uses a CNC, a laser cutter, and recently a 3D printer.
Alain, also known as The Woodpecker (L’gosseux d’bois in French), produces videos in French and English. As a hobbyist he built himself a large workshop, which is documented on his channel. He works mainly in wood making a wide variety of projects - and is not afraid to document his mistakes!
Channel description: I am an obsessed DIYer and Woodworker. I’m not professional or have any training, so I just pick the project I want to tackle and figure it out step by step. I picked up my first tool in January of 2013. I couldn’t afford to buy the things I wanted around my home, so I decided to try my hand at making them instead. I was hooked after my first project so I just never stopped. I put out a video as well as a written tutorial on all the projects I do to improve my home and workshop. You can find my website at Wilkerdos.com
+
Country: United States
+
It has been fascinating to watch April develop her woodworking and, more recently, metalworking skills. She seems to take on more challenging projects each time.
Channel description: Making things is what I like to do. I am going to put build videos on Fridays showing how to make a variety of things. Wednesdays I do my vlog or “talkie talkie” going over previous builds and talking about general shop stuff. If you are a builder, maker or DIYer this channel is for you.
+
Country: United States
+
Some interesting projects, mainly in wood, with lots of very useful hints and tips.
Channel description: My name is Linn and this is the Darbin Orvar channel. I build stuff and make videos.
+
Country: United States
+
Linn, the channel owner, originates from Sweden. She makes a wide variety of projects, often in wood, but also creates a number of other things such as leather items and electronic devices.
Channel description: Videos about woodworking, homemade machines, wood-turning, making jigs and more stuff from me out of my small basement workshop. I try to post one video a week, or at least one every two weeks…but I don’t really care about a fixed upload schedule, because I produce my videos with a lot of editing. And I spend as much time as it takes until I think the video is finished. So yeah, I upload when I upload. You probably will see more videos about making tools for woodworking than actually woodworking projects…that’s kind of my interest.
+
Country: Germany
+
I have only recently found Marius’ channel. He is very skilled for someone so young, making very impressive equipment and tools for his workshop. His videos are very well made and often quite humorous. He is currently at university studying Engineering.
Channel description: Watch me make all kinds of stuff in my shop! Build Videos on Sundays. Vlogs every other week.
+
Country: Germany
+
Laura has only been on YouTube since 2015, and I have not been subscribed for very long. She makes a wide variety of projects which embody impressive artistic, design and construction skills.
Channel description: This is all about inspiring you to build stuff with your own hands! I’m Cristiana and I make pretty much everything you see and hear and I love developing my work on lots of different medias and processes.
+
Country: Portugal
+
The channel is run by Cristiana who trained as a sculptor but has developed some impressive skills in woodworking and making videos.
Channel description: Videos on woodworking and workshop related projects. Check out my website for hundreds of cool projects: www.ibuildit.ca. Me and my shop are in Ontario, Canada. I’m a carpenter by trade and have worked in commercial construction since 1985.
+
Country: Canada
+
John makes devices and tools, mainly from wood. He has some very original solutions to common workshop problems. He has two other channels.
+
+
+
Podcasts
+
A few of the makers also produce podcasts. There may be more than the following list but these are the ones I listen to.
Feed description: The audio only version of the BrainPick video series. It’s a live Q&A, hosted on YouTube, with Bob Clagett (Iliketomakestuff.com) and a special guest.
+
This series seems to be on hiatus at the moment, but the interviews to date have been very interesting to listen to.
Feed description: Making It is a biweekly audio podcast hosted by Jimmy Diresta, Bob Clagett and David Picciuto. Three different makers with different backgrounds talking about creativity, design and making things with your bare hands.
+
This is quite a long-running podcast, but I have only started listening to it in the past few months. I think it’s excellent, and covers some great topics.
Feed description: Weekly podcast that discusses upcycling and making with reclaimed materials. Hosts: Phil Pinsky, Tim Sway, and Bill Lutes
+
This is a relatively recent addition to my podcast list. I am following the hosts’ YouTube channels as well but don’t have much to say about them since I have only recently subscribed. It’s an interesting and often very amusing podcast.
This one is fairly new, having just reached episode 27. I follow all of the hosts’ YouTube channels and have included them in my earlier list. I enjoy the dynamic of these three in podcast form.
I have been listening to podcasts for many years. I started in 2005, when I bought my first MP3 player.
+
Various podcast downloaders (or podcatchers) have existed over this time, some of which I have tried. Now I use a script based on Bashpodder, which I have built to meet my needs. I also use a database to hold details of the feeds I subscribe to, what episodes have been downloaded, what is on a player to be listened to and what can be deleted. I have written many scripts (in Bash, Perl and Python) to manage all of this, and I will be describing the overall workflow in this episode without going into too much detail.
Note: I’m embarrassed to say that I started this episode in April 2016 and somehow forgot all about it until January 2017!
+
Podcast Feeds
+
A podcast feed is defined by an XML file, using one of two main formats. These formats are called RSS and Atom. Both formats basically consist of a list of structured items each of which can contain a link to a multimedia file or “enclosure”. It’s the enclosure that makes it a podcast as opposed to other sorts of feeds - see the Wikipedia article on the subject.
+
The way in which the feed is intended to be used is that when new material is released on the site, the feed is updated to reflect the change. Then podcatchers can monitor the feed for changes and take action when an update is detected. The relevant action with a podcast feed is that the enclosures in the feed are downloaded, and the podcatcher maintains a local list of what has already been downloaded.
+
The structure of an RSS or Atom feed allows for there to be a unique identifier associated with each enclosure, and this is intended to act as a label for that enclosure to make it easier to avoid duplicates.
+
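For illustration, a minimal sketch of an RSS item carrying an enclosure and its identifier might look like this (all of the values are invented):
+
<item>
+  <title>Example episode</title>
+  <guid isPermaLink="false">unique-id-1234</guid>
+  <enclosure url="http://example.com/episode1.mp3" length="12345678" type="audio/mpeg"/>
+</item>
+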
Workflow
+
Bashpodder
+
I use a rewritten version of Bashpodder to download my podcasts. I have modified the original design in two main ways:
+
+
I enhanced the XSLT file (parse_enclosure.xsl) used for parsing the feed (using xsltproc1) so that it can handle feeds using Atom as well as RSS. The original only handled RSS. (An example invocation appears after this list.)
+
I made it keep a file of ID strings from the feeds to help determine which episode has already been downloaded. The original only kept the episode URLs which was fine at the time, but is not enough in these days of idiosyncratic feeds. My XSLT file is called parse_id.xsl.
+
+
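By way of illustration, such a stylesheet can be applied to a downloaded feed with xsltproc along these lines (the file names are just examples):
+
$ xsltproc parse_enclosure.xsl feed.xml
+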
My Bashpodder clone cannot deal with feeds where the enclosure URL does not show the actual download URL. I am working on a solution to this but haven’t got a good one yet. Charles in NJ mentions a fix for a similar (or maybe the same) problem in his show 1935 “Quick Bashpodder Fix”.
+
I run this script on one of my Raspberry Pi’s once a day during the night. This was originally done because I had a slow ADSL connection which was being quite heavily used by my kids during the day. The Pi in question places the downloads in a directory which I export with NFS and mount on other machines.
+
Database
+
As I have already said, I use a database to hold the details of my feeds and downloads. This came about for several reasons:
+
+
I’m interested in databases and want to learn how to use them
+
I chose PostgreSQL because it is very feature-rich and flexible, and at the time I was using it at work.
+
I wanted to be able to generate all sorts of reports and perform all kinds of actions based on the contents of the database
+
+
The database runs on my workstation rather than on the server.
+
As far as design is concerned, I “bolted on” the database to the existing Bashpodder model where podcasts are downloaded and stored in a directory according to the date. Playlists were generated by the original Bashpodder for each day’s episodes, and I have continued to do this until fairly recently.
+
Really, if using a database in this way, it would be better to integrate the podcatcher with it. However, I didn’t do this because of the way it evolved.
+
As a result I have scripts which I run each morning whose job it is to look at the night’s downloads and update the database with their details. The long-term plan is to write a whole new system from scratch which integrates everything, but I don’t see this happening for a while.
+
In my database I have the following main tables:
+
+
feeds
+
Contains the feed details like its title and URL. It also classifies each feed into a group like science or documentary
+
+
episodes
+
Contains the items within the feeds with information like the title, the URL of the media, where the downloaded episode is and the feed the episode belongs to.
+
+
groups
+
This table contains the groups I have defined, like comedy and music. This is just my personal classification
+
+
players
+
The database has a list of all the players I own. I did a show about this in 2014.
+
+
playlists
+
I make my own playlists for each player, and these are stored in the database (and on the player).
+
+
+
Audio tags
+
Many podcasters generate excellent metadata for their episodes. All of the players I use on a regular basis run Rockbox, and it can display the metadata tags which helps me to work out what I’m listening to and what’s coming next. I also like to look at tags when I’m dealing with podcast episodes on my workstation, so I reckon having good quality metadata is important.
+
Because a number of podcast episodes have poor or even non-existent tags I wanted to write tools to improve them. I originally wrote a tool called fix_tags, which has been used on the HPR server for several years, and is available on GitHub. I also wrote a tag management tool for daily use.
+
The daily tool is called tag_manager and it scans all of the podcast episodes I currently have on disk and applies tag rules to them. Rules are things like: “if there is no title tag, add one from the title field of the item in the feed”. I also do things like add a prefix to the title in some cases, such as adding ‘HPR’ to all HPR episodes so it’s easier to identify them individually in a list.
+
The rules are written in a format which is really ugly, but it works. I have plans to develop my own rule “language” at some point.
+
Here’s the rule for the BBC “Elements” podcast:
+
<rule "Elements">
+ genre = $default_genre
+ year = "\".(defined(\$ep_year) ? \$ep_year : \$fileyear).\""
+ album = "Elements"
+ comment = "\".clean_string(\$comment).\""
+ # If no title, use the enclosure title
+ <regex "^\s*$">
+ match = title
+ title = "\$ep_title"
+ </regex>
+ # If no comment, use the enclosure description
+ <regex "^\s*$">
+ match = comment
+ comment = "\$ep_description"
+ </regex>
+ # Add 'Elements:' to the front of the title if it's not there
+ <regex "^(?!Elements: )(\S.+)$">
+ match = title
+ title = "Elements: \$1"
+ </regex>
+</rule>
+
Writing episodes to a player
+
I use tools I have written to copy podcast episodes to whichever player I want to use. Normally I listen to everything on a given player then refill it after re-charging it. I usually write podcast episodes in groups, so I might load a particular player with groups like business, comedy, documentary, environment, and history.
+
As episodes are written their status is updated in the database and a playlist is created. The playlist is held in the database but is also written to a file on the player. Rockbox has the ability to work from pre-defined playlist files, and this is the way I organise my listening on a given player.
+
Deleting what I’ve listened to
+
As I listen to an episode I run a script on my workstation to mark that particular episode as “being listened to”, and when I have finished a given episode I run another script to delete it. The deletion script simply looks for episodes in the “being listened to” state and asks which of these to delete.
+
This way I make sure that episodes are deleted as soon as possible after listening to them. I never explicitly delete episodes from the players, I simply over-write them when I next load a particular player.
+
Other tools
+
A lot of other tools have been developed for viewing the status of the system, fixing problems and so forth. Some of the key tools are:
+
+
A feed viewer: it summarises the feed and any downloaded episodes. It can generate reports in a variety of formats. I used it to generate the notes for two HPR shows (1516, 1518) I did on the podcast feeds I’m subscribed to.
+
A tool for subscribing to a new feed; this is the point at which the feed is assigned to a group and where it is decided which episodes are to be initially downloaded.
+
A tool for cancelling a subscription: such feeds are held in an archive with notes about why they were cancelled - for the sake of posterity. Also, I have been known to re-subscribe to a feed I have cancelled. The subscribing script checks the archive and asks if I really want to do this, reminding me why I said I wanted to cancel last time!
+
+
Conclusions
+
I have been fiddling about with this way of doing things for a long time. I seem to have started in 2011 and since that time have kept a journal associated with the project. This currently contains over 8000 lines of notes about what I have been doing, problems, solutions, etc.
+
What’s good about this scheme?
+
+
It’s pretty much all mine! I was inspired originally by Bashpodder, but the current script is a complete rewrite.
+
It works, and does pretty much all I want it to do and now needs very little effort to run and maintain.
+
Along the way I have learned tons of stuff. For example:
+
+
I understand XML and XSLT better
+
I understand RSS and Atom feeds better
+
I know a lot more about Bash scripting, though I’m still learning!
+
I have learned a fair bit more about PostgreSQL and databases in general
+
I understand a fair bit more about audio tags and the TagLib library that I use to manipulate them (both in Perl and Python)
+
+
It does have what I think are a lot of good ideas about how to deal with podcast feeds and episodes, though these are often implemented badly in my scripts.
+
+
What’s bad?
+
+
It’s clunky and badly designed. It’s the result of hacks layered on hacks. It’s really an alpha version of what I want to implement and should be junked and completely rewritten.
+
It is not sufficiently resilient to feed issues and bad practices by feed owners. For example, the BBC have this strange habit of releasing an episode then re-releasing it a while later for reasons unknown. They make it difficult to recognise the re-release for what it is, so I sometimes get duplicates. Other podcatchers deal with this situation better than my system does.
+
It’s not easy to extend. For example, the current trend of “hiding” podcast episodes behind strange URLs which have to be interrogated through layers of redirection to find the actual name of the file containing the episode. Adding an algorithm to handle this is quite challenging, due to the design.
+
It’s completely incapable of being shared. I’d have liked to offer my efforts to the world, but in its current incarnation it’s absolutely not something anyone else would want.
I had forgotten the name of the parsing tool xsltproc when recording the audio, so added it in the notes.↩
+
diff --git a/eps/hpr2211/hpr2211_parse_enclosure.xsl b/eps/hpr2211/hpr2211_parse_enclosure.xsl
new file mode 100755
index 0000000..0e8df03
--- /dev/null
+++ b/eps/hpr2211/hpr2211_parse_enclosure.xsl
@@ -0,0 +1,17 @@
diff --git a/eps/hpr2211/hpr2211_parse_id.xsl b/eps/hpr2211/hpr2211_parse_id.xsl
new file mode 100755
index 0000000..7b550dd
--- /dev/null
+++ b/eps/hpr2211/hpr2211_parse_id.xsl
@@ -0,0 +1,17 @@
diff --git a/eps/hpr2238/hpr2238_contacts.awk b/eps/hpr2238/hpr2238_contacts.awk
new file mode 100755
index 0000000..815170f
--- /dev/null
+++ b/eps/hpr2238/hpr2238_contacts.awk
@@ -0,0 +1,45 @@
+#!/usr/bin/awk -f
+
+#
+# Define separators
+#
+BEGIN{
+ #
+ # The field separator is a newline
+ #
+ FS = "\n"
+
+ #
+ # The record separator is two newlines since there's a blank line between
+ # contacts.
+ #
+ RS = "\n\n"
+
+ #
+ # On output write a line of hyphens on a new line
+ #
+ ORS = "\n----\n"
+}
+
+{
+ #
+ # Show where the "beginning of buffer" is
+ #
+ sub(/\`/, "[")
+
+ #
+ # Show where the "end of buffer" is
+ #
+ sub(/\'/, "]")
+
+ #
+ # Show where the start and end of "line" are
+ #
+ sub(/^/, "{")
+ sub(/$/, "}")
+
+ #
+ # Print the buffer with a record number and a field count
+ #
+ print "(" NR "/" NF ")", $0
+}
diff --git a/eps/hpr2238/hpr2238_contacts.txt b/eps/hpr2238/hpr2238_contacts.txt
new file mode 100755
index 0000000..dd9de1c
--- /dev/null
+++ b/eps/hpr2238/hpr2238_contacts.txt
@@ -0,0 +1,60 @@
+Name: Robin Richardson
+First: Robin
+Last: Richardson
+Email: rrichardson0@163.com
+Gender: Female
+
+Name: Anne Price
+First: Anne
+Last: Price
+Email: aprice1@cam.ac.uk
+Gender: Female
+
+Name: Annie Warren
+First: Annie
+Last: Warren
+Email: awarren2@huffingtonpost.com
+Gender: Female
+
+Name: Dorothy Turner
+First: Dorothy
+Last: Turner
+Email: dturner3@amazon.co.jp
+Gender: Female
+
+Name: Barbara Gonzales
+First: Barbara
+Last: Gonzales
+Email: bgonzales4@diigo.com
+Gender: Female
+
+Name: Shawn Spencer
+First: Shawn
+Last: Spencer
+Email: sspencer5@usda.gov
+Gender: Male
+
+Name: Heather Anderson
+First: Heather
+Last: Anderson
+Email: handerson6@imgur.com
+Gender: Female
+
+Name: Benjamin Wells
+First: Benjamin
+Last: Wells
+Email: bwells7@bbc.co.uk
+Gender: Male
+
+Name: Elizabeth Little
+First: Elizabeth
+Last: Little
+Email: elittle8@prlog.org
+Gender: Female
+
+Name: Joshua Snyder
+First: Joshua
+Last: Snyder
+Email: jsnyder9@dot.gov
+Gender: Male
+
diff --git a/eps/hpr2238/hpr2238_full_shownotes.epub b/eps/hpr2238/hpr2238_full_shownotes.epub
new file mode 100755
index 0000000..66cf216
Binary files /dev/null and b/eps/hpr2238/hpr2238_full_shownotes.epub differ
diff --git a/eps/hpr2238/hpr2238_full_shownotes.html b/eps/hpr2238/hpr2238_full_shownotes.html
new file mode 100755
index 0000000..de1f5c6
--- /dev/null
+++ b/eps/hpr2238/hpr2238_full_shownotes.html
@@ -0,0 +1,488 @@
+
Gnu Awk - Part 6 (HPR Show 2238)
+
Dave Morriss
+
Table of Contents
+
+
+
Introduction
+
This is the sixth episode of the “Learning Awk” series that Mr. Young and I are doing.
+
Recap of the last episode
+
Regular expressions
+
In the last episode we saw regular expressions in the ‘pattern’ part of a ‘pattern {action}’ sequence. Such a sequence is called a ‘RULE’ (as we have seen in earlier episodes).
+
$1 ~ /p[elu]/ {print $0}
+
Meaning: If field 1 contains a ‘p’ followed by one of ‘e’, ‘l’ or ‘u’ print the whole line.
+
$2 ~ /e{2}/ {print $0}
+
Meaning: If field 2 contains two instances of letter ‘e’ in sequence, print the whole line.
+
It is usual to enclose the regular expression in slashes, which makes it a regexp constant (see the GNU Manual for the details of these constants).
+
We had a look at many of the operators used in regular expressions in episode 5. Unfortunately, some small errors crept into the list of operators mentioned in that episode. These are incorrect:
+
+
\A (beginning of a string)
+
\z (end of a string)
+
\b (on a word boundary)
+
\d (any digit)
+
+
The first two operators exist, as does the last one, but only in languages like Perl and Ruby, not in GNU Awk.
+
For the ‘\b’ sequence the GNU manual says:
+
+
In other GNU software, the word-boundary operator is ‘\b’. However, that conflicts with the awk language’s definition of ‘\b’ as backspace, so gawk uses a different letter. An alternative method would have been to require two backslashes in the GNU operators, but this was deemed too confusing. The current method of using ‘\y’ for the GNU ‘\b’ appears to be the lesser of two evils.
+
+
The corrected list of operators is discussed later in this episode.
+
Replacement
+
Last episode we saw the built-in functions that use regular expressions for manipulating strings. These are sub, gsub and gensub. Regular expressions are used in other functions but we will look at them later.
+
We will be looking at sub, gsub and gensub in more detail in this episode.
+
More about regular expressions
+
More regular expression operators
+
We have seen that the regular expressions in GNU Awk use certain characters to denote concepts. For example, ‘.’ is not a full-stop (period) in a regular expression, but means any character. This special meaning can be turned off by preceding the character by a backslash ‘\’. Since a backslash is itself a special character, if you need an actual backslash in a regular expression then precede it with a backslash (‘\\’). We will demonstrate how the backslash might be used in the examples later.
+
Note that (as with GNU sed) some regular expression operators consist of a backslash followed by a character.
+
The following table summarises some of the regular expression operators, including some we have already encountered.
+
+
Expression        Meaning
any character     A single ordinary character matches itself
.                 Matches any character
*                 Matches a sequence of zero or more instances of the preceding item
[list]            Matches any single character in list: for example, [aeiou] matches all vowels
[^list]           A leading ‘^’ reverses the meaning of list, so that it matches any single character not in list
^                 Matches the beginning of the line (anchors the search at the start)
$                 Matches the end of the line (anchors the search at the end)
+                 Similar to * but matches a sequence of one or more instances of the preceding item
?                 Similar to * but matches a sequence of zero or one instance of the preceding item
{i}               Matches exactly i sequences (i is a decimal integer)
{i,j}             Matches between i and j sequences, inclusive
{i,}              Matches i or more sequences
(regexp)          Groups the inner regexp. Allows it to be followed by a postfix operator, or can be used for back references (see below)
regexp1|regexp2   Matches regexp1 or regexp2; ‘|’ is used to separate alternatives
+
+
The expressions ‘[list]’ and ‘[^list]’ are known as bracket expressions in GNU Awk. They represent a single character chosen from the list.
+
To include the characters ‘\’, ‘]’, ‘-’ or ‘^’ in the list precede them with a backslash.
+
The character classes like ‘[:alnum:]’ were dealt with in episode 5. These can only be used in bracket expressions and represent a single character. They are able to deal with extended character data (such as Unicode) whereas the older list syntax cannot.
+
There are a number of GNU Awk (gawk) specific regular expression operators, some of which we touched on in the recap.
+
+
\s
+
matches any whitespace character. Equivalent to the ‘[:space:]’ character class in a bracket expression (i.e. ‘[[:space:]]’).
+
+
\S
+
matches any character that is not whitespace. Equivalent to ‘[^[:space:]]’.
+
+
\w
+
matches any word character. A word character is any letter or digit or the underscore character.
+
+
\W
+
matches any non-word character.
+
+
\<
+
(backslash less-than) matches the empty string at the beginning of a word.
+
+
\>
+
(backslash greater-than) matches the empty string at the end of a word.
+
+
\y
+
(backslash y) matches a word boundary; that is, it matches if the character to the left is a word character and the character to the right is a non-word character, or vice-versa.
+
+
\B
+
Matches everywhere but on a word boundary; that is it matches if the character to the left and the character to the right are either both word characters or both non-word characters. This is essentially the opposite of ‘\y’.
+
+
\`
+
(backslash backquote) matches the empty string at the beginning of a string. This is essentially the same as the ‘^’ (circumflex or caret) operator, which means the beginning of the current line ($0), or the start of a string.
+
+
\'
+
(backslash single-quote) matches the empty string at the end of a string. This is essentially the same as the ‘$’ (dollar sign) operator, which means the end of the current line ($0), or the end of a string.
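+
A couple of quick sketches showing some of these operators at work:
+
$ echo "cat concatenate" | awk '{ gsub(/\<cat\>/, "CAT"); print }'
+CAT concatenate
+$ echo "cat concatenate" | awk '{ gsub(/cat/, "CAT"); print }'
+CAT conCATenate
+
Here ‘\<cat\>’ matches ‘cat’ only as a complete word, whereas the plain regular expression also matches inside ‘concatenate’.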
+
+
+
GNU Awk can behave as traditional Awk, can operate in strict POSIX mode, or can have other regular expression features turned on and off individually. There is a discussion of this in the GNU Awk manual, particularly in the Regular Expression section.
+
Functions
+
The details of the built-in functions we will be looking at here can be found in the GNU Manual in the String-Manipulation Functions section.
+
The sub function
+
The sub function has the format:
+
sub(regexp, replacement [, target])
+
The first argument regexp is a regular expression. This usually means it is enclosed in ‘//’ delimiters [1].
+
The second argument replacement is a string to be used to replace the text matched by the regexp. If this contains a ‘&’ character this refers to the text that was matched.
+
The optional third argument target is the name of the string or field that will be changed by the function. It has to be an existing string variable or field since sub changes it in place. If the target is omitted then field ‘$0’ (the whole input line) is modified.
+
The purpose of the sub function is to search the string in the target variable for the longest leftmost match with the regexp argument. This is replaced by the replacement argument.
+
The function returns the number of changes made (which can only be zero or 1).
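+
For example (a sketch; the input line is an invention for illustration):
+
$ echo "Hello world" | awk '{ n = sub(/[aeiou]/, "?"); print $0, n }'
+H?llo world 1
+
Only the first vowel is changed, and the return value of 1 records the single substitution.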
+
The gsub function
+
The gsub function takes the same arguments as sub, but it replaces all of the matches rather than just the first, and it returns the total number of substitutions made.
+
This time we used the example file file1.txt and replaced all vowels with question marks, then captured the number changed. We printed the result and the number of changes.
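+
A sketch of that command (assuming the file1.txt sample data used earlier in the series):
+
$ awk '{ n = gsub(/[aeiou]/, "?"); print $0, "(" n " changes)" }' file1.txt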
+
The gensub function
+
This function is different from the other two, and was added to GNU Awk later than sub and gsub [2]:
+
gensub(regexp, replacement, how [, target])
+
First argument: regexp
+
This is a regular expression (usually a regexp constant enclosed in slashes). Any of the regular expression operators seen in this and the last episode can be used. In particular, regular expressions enclosed in parentheses can be used here. (Similar features were described in the “Learning sed” series).
+
Second argument: replacement
+
In this argument, which is a string, the text to use for replacement is defined. This can also contain back references to text “captured” by the parenthesised expressions mentioned above.
+
The back references consist of a backslash followed by a number. If the number is zero it refers to the text matched by the entire regular expression, and is equivalent to the ‘&’ character. Otherwise the number may be 1 to 9, referring to a parenthesised group.
+
Because of the way Awk processes strings, it is necessary to double the backslash in this argument. For instance, to refer to parenthesised component number one the string must be “\\1”.
+
Third argument: how
+
This is a string which should contain ‘G’, ‘g’ or a number.
+
If ‘G’ or ‘g’ (global) it means that all matches should be replaced as specified.
+
If it is a number then it indicates which particular numbered match and replacement should be performed. It is not possible to perform multiple actions with this feature.
+
Fourth argument: target
+
If this optional argument is omitted then the field ‘$0’ is used. Otherwise the argument can be a string, a variable (containing a string) or a field.
+
The target is not changed in situ, unlike with sub and gsub. The function returns the changed string instead.
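+
For instance (a sketch; the input word is an invention):
+
$ echo "banana" | awk '{ result = gensub(/a/, "A", "g"); print result; print $0 }'
+bAnAnA
+banana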
Here gensub matches every occurrence of ‘a’, replacing it with capital ‘A’ globally. Note how we print the result of the gensub call. Note also that ‘$0’ has not changed as can be seen when we print it with the second print statement.
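+
Requesting a single numbered match instead (a sketch):
+
$ echo "banana" | awk '{ print gensub(/a/, "A", 1) }'
+bAnana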
In this example we have requested that only the first match be replaced. There is no way to replace anything other than all matches or just one using the how argument.
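+
Using the ‘\B’ operator (a sketch):
+
$ echo "banana and apple" | awk '{ print gensub(/\Ba/, "A", "g") }'
+bAnAnA and apple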
This example shows another way to replace matching letters. In this case we have specified only ’a’s which are not at a word boundary. This is not an ideal solution.
This example shows the use of regular expression groups and back references. The three groups are:
+
+
A single “word” character
+
One or more “word” characters
+
Zero or more non-“word” characters
+
+
Having matched these items (e.g. ‘H’, ‘acker’ and space for the first word), they are replaced by the second group (‘acker’), the first group (‘H’), the letters ‘ay’ and the third group (space). This is repeated throughout the target.
+
Since the target text consists of three words the regular expression matches three times (since argument how was a ‘g’) and the words are all processed the same way - into primitive “Pig Latin”.
+
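The global form of the example might have looked like this (a reconstruction from the description above; here the input is piped in rather than being a string constant):
+
$ echo "Hacker Public Radio" | awk '{print gensub(/(\w)(\w+)(\W*)/,"\\2\\1ay\\3","g")}'
+ackerHay ublicPay adioRay
+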
$ awk 'BEGIN{print gensub(/(\w)(\w+)(\W*)/,"\\2\\1ay\\3","3","Hacker Public Radio")}'
+Hacker Public adioRay
+
This example is a variant of the previous one. In this case the entire Awk script is in a ‘BEGIN’ rule, and the target is a string constant. Since argument how is the number 3 then only the third match is replaced.
+
Example script
+
I have included a longer example using a new test datafile. The example Awk script is called contacts.awk and the data file is contacts.txt. They are included with this show and links to them are listed below.
+
The test data was generated on a site called “Mockaroo”, which was used to generate CSV data. The Vim plugin csv.vim was used to reformat this into the final format with the :ConvertData function. Here are the first 8 lines from that file:
+
Name: Robin Richardson
+First: Robin
+Last: Richardson
+Email: rrichardson0@163.com
+Gender: Female
+
+Name: Anne Price
+First: Anne
+
Here is the entire awk script, which can be run thus:
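+
$ ./contacts.awk contacts.txt
+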
#!/usr/bin/awk -f
+
+#
+# Define separators
+#
+BEGIN{
+ #
+ # The field separator is a newline
+ #
+ FS="\n"
+
+ #
+ # The record separator is two newlines since there's a blank line between
+ # contacts.
+ #
+ RS="\n\n"
+
+ #
+ # On output write a line of hyphens on a new line
+ #
+ ORS="\n----\n"
+}
+
+{
+ #
+ # Show where the "beginning of buffer" is
+ #
+ sub(/\`/,"[")
+
+ #
+ # Show where the "end of buffer" is
+ #
+ sub(/\'/,"]")
+
+ #
+ # Show where the start and end of "line" are
+ #
+ sub(/^/,"{")
+ sub(/$/,"}")
+
+ #
+ # Print the buffer with a record number and a field count
+ #
+ print"("NR"/"NF")",$0
+}
+
The script changes the default separators in order to treat the entire block of lines making up a contact as a single Awk “record”. Each field is separated from the next with a newline, and each “record” is separated from the next by two newlines. For variety when printing the output “records” are separated by a newline, four hyphens and a newline.
+
As it processes each “record” the script marks the positions of four boundaries using some of the regular expression operators we have seen in this episode. It prints the “record” ($0), preceded by the record number and the number of fields.
+
A sample of the first 8 lines of the output looks like this:
+
(1/5) {[Name: Robin Richardson
+First: Robin
+Last: Richardson
+Email: rrichardson0@163.com
+Gender: Female]}
+----
+(2/5) {[Name: Anne Price
+First: Anne
+
Warning for sed users
+
GNU awk is related to GNU sed, which was covered in the series “Learning sed”. If you listened to that series there is unfortunately some potential for confusion as we learn about GNU Awk. Many of the regular expression operators described for GNU sed are the same as those used in GNU Awk except that sed uses a backslash in front of some and Awk does not. Examples are ‘\+’ and ‘\?’ in sed versus ‘+’ and ‘?’ in Awk.
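+
For instance (a quick sketch):
+
$ echo "abc123" | sed 's/[0-9]\+/N/'
+abcN
+$ echo "abc123" | awk '{ sub(/[0-9]+/, "N"); print }'
+abcN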
+
Summary
+
This episode covered:
+
+
A recap of the last episode
+
+
Correcting some small errors in the list of regular expression operators
+
+
More detail of regular expression operators
+
A detailed description of the functions sub, gsub and gensub with examples
+
A more complex example Awk script
+
A warning about the differences in regular expressions between sed and Awk
[1] This is a “Regexp Constant”, but there is another form, the “Computed Regexp”, which is discussed in the GNU Manual.↩
+
[2] As a possible point of interest, I have a copy of the “GAWK Manual” (as it was called), dated 1992, version 0.14, which does not contain gensub.↩
+
diff --git a/eps/hpr2238/hpr2238_full_shownotes.pdf b/eps/hpr2238/hpr2238_full_shownotes.pdf
new file mode 100755
index 0000000..042f11d
Binary files /dev/null and b/eps/hpr2238/hpr2238_full_shownotes.pdf differ
diff --git a/eps/hpr2245/hpr2245_clean_csv_tags b/eps/hpr2245/hpr2245_clean_csv_tags
new file mode 100755
index 0000000..5ef5a89
--- /dev/null
+++ b/eps/hpr2245/hpr2245_clean_csv_tags
@@ -0,0 +1,301 @@
+#!/usr/bin/env perl
+#===============================================================================
+#
+# FILE: clean_csv_tags
+#
+# USAGE: ./clean_csv_tags
+#
+# DESCRIPTION: Make sure tags in the eps.tags field of the HPR database
+# conform to CSV format.
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: ---
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.0.1
+# CREATED: 2017-01-30 15:32:04
+# REVISION: 2017-01-30 17:17:51
+#
+#===============================================================================
+
+use 5.010;
+use strict;
+use warnings;
+use utf8;
+
+use Carp;
+use Getopt::Long;
+use Config::General;
+use Text::CSV_XS;              # the script calls Text::CSV_XS->new below
+use List::MoreUtils qw{uniq};
+use SQL::Abstract;
+use DBI;
+
+use Data::Dumper;
+
+#
+# Version number (manually incremented)
+#
+our $VERSION = '0.0.1';
+
+#
+# Script and directory names
+#
+( my $PROG = $0 ) =~ s|.*/||mx;
+( my $DIR = $0 ) =~ s|/?[^/]*$||mx;
+$DIR = '.' unless $DIR;
+
+#-------------------------------------------------------------------------------
+# Declarations
+#-------------------------------------------------------------------------------
+#
+# Constants and other declarations
+#
+my $basedir = "$ENV{HOME}/HPR/Database";
+my $configfile = "$basedir/.hpr_db.cfg";
+
+my ( $dbh, $sth1, $h1, $rv );
+my ( %eps_tags, %diffs );
+
+#
+# Enable Unicode mode
+#
+binmode STDOUT, ":encoding(UTF-8)";
+binmode STDERR, ":encoding(UTF-8)";
+
+#
+# Load configuration data
+#
+my $conf = Config::General->new(
+ -ConfigFile => $configfile,
+ -InterPolateVars => 1,
+ -ExtendedAccess => 1,
+);
+my %config = $conf->getall();
+
+#-------------------------------------------------------------------------------
+# Options and arguments
+#-------------------------------------------------------------------------------
+#
+# Process options
+#
+my %options;
+Options( \%options );
+
+Usage() if ( $options{'help'} );
+
+#
+# Collect options
+#
+my $verbose = ( defined( $options{verbose} ) ? $options{verbose} : 0 );
+my $dry_run = ( defined( $options{'dry-run'} ) ? $options{'dry-run'} : 1 );
+
+#-------------------------------------------------------------------------------
+# Connect to the database
+#-------------------------------------------------------------------------------
+my $dbhost = $config{database}->{host} // '127.0.0.1';
+my $dbport = $config{database}->{port} // 3306;
+my $dbname = $config{database}->{name};
+my $dbuser = $config{database}->{user};
+my $dbpwd = $config{database}->{password};
+$dbh = DBI->connect( "dbi:mysql:host=$dbhost;port=$dbport;database=$dbname",
+ $dbuser, $dbpwd, { AutoCommit => 1 } )
+ or croak $DBI::errstr;
+
+#
+# Enable client-side UTF8
+#
+$dbh->{mysql_enable_utf8} = 1;
+
+#-------------------------------------------------------------------------------
+# Collect and process the id numbers and tags from the 'eps' table
+#-------------------------------------------------------------------------------
+%eps_tags = %{ collect_eps_tags( $dbh, $verbose ) };
+
+#-------------------------------------------------------------------------------
+# Turn all the saved and cleaned tags into CSV strings again and save them
+# back to the database. TODO: find differences and only write those back
+#-------------------------------------------------------------------------------
+#
+# Force quoting everywhere
+#
+my $csv = Text::CSV_XS->new( { always_quote => 1 } );
+
+my $status;
+
+#
+# Loop through the hash in order of show number
+#
+for my $id ( sort keys %eps_tags ) {
+ #
+ # Put the array fields back together
+ #
+ $status = $csv->combine( @{ $eps_tags{$id} } );
+
+ #
+ # Write them to the database
+ #
+ $dbh->do( q{UPDATE eps SET tags = ? WHERE id = ?},
+ undef, $csv->string(), $id );
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+}
+
+exit;
+
+#=== FUNCTION ================================================================
+# NAME: collect_eps_tags
+# PURPOSE: Collects the tags from the eps.tags field
+# PARAMETERS: $dbh Database handle
+# $verbose Verbosity level
+# RETURNS: A reference to the hash created by collecting all the tags
+# DESCRIPTION: Read the 'id' and tags' fields from the database. Parse the
+# tags as CSV data, flagging any errors. Trim each one and store
+# them in a hash keyed on the id number. The list of tags is
+# stored as an array in sorted order after ensuring there are
+# no duplicates. At verbosity levels over 1 the entire hash is
+# printed.
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub collect_eps_tags {
+ my ( $dbh, $verbose ) = @_;
+
+ my ( $status, @fields, %hash );
+ my ( $sth, $h );
+
+ #
+ # For parsing the field as CSV
+ #
+ my $csv = Text::CSV_XS->new;
+
+ #
+ # Query the eps table for all the id and tags
+ #
+ $sth = $dbh->prepare(
+ q{SELECT id,tags FROM eps
+ WHERE length(tags) > 0
+ ORDER BY id}
+ ) or die $DBI::errstr;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ $sth->execute;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ #
+ # Loop through what we got
+ #
+ while ( $h = $sth->fetchrow_hashref ) {
+ #
+ # Parse the tag list
+ #
+ $status = $csv->parse( $h->{tags} );
+ unless ($status) {
+ #
+ # Report and skip any errors
+ #
+ print "Parse error on episode ", $h->{id}, "\n";
+ print $csv->error_input(), "\n";
+ next;
+ }
+ @fields = $csv->fields();
+
+ next unless (@fields);
+
+ #
+ # Trim all tags (don't alter $_ when doing it)
+ #
+ @fields = map {
+ my $t = $_;
+ $t =~ s/(^\s+|\s+$)//g;
+ $t;
+ } @fields;
+
+ #
+ # De-duplicate
+ #
+ @fields = uniq(@fields);
+
+ #print "$h->{id}: ",join(",",@fields),"\n";
+
+ #
+ # Save the id and its tags, sorted for comparison
+ #
+ $hash{ $h->{id} } = [ sort @fields ];
+
+ }
+
+ #print Dumper(\%hash),"\n";
+
+ #
+ # Dump all id numbers and tags if the verbose level is high enough
+ #
+ if ( $verbose >= 2 ) {
+ print "\nTags collected from the 'eps' table\n\n";
+ foreach my $id ( sort { $a <=> $b } keys(%hash) ) {
+ printf "%04d: %s\n", $id, join( ",", @{ $hash{$id} } );
+ }
+ }
+
+ return \%hash;
+
+}
+
+#=== FUNCTION ================================================================
+# NAME: Usage
+# PURPOSE: Display a usage message and exit
+# PARAMETERS: None
+# RETURNS: To command line level with exit value 1
+# DESCRIPTION: Builds the usage message using global values
+# THROWS: no exceptions
+# COMMENTS: none
+# SEE ALSO: n/a
+#===============================================================================
+sub Usage {
+ print STDERR <
+
Managing tags on HPR episodes - 1 (HPR Show 2245)
+
Dave Morriss
+
Table of Contents
+
+
+
Introduction
+
We have been collecting and storing tags for new HPR shows for a while now with the intention of eventually offering a search interface. In addition, a number of contributors, including myself, have been adding tags (and summaries) to shows that do not have them, since August 2015. There is still a way to go, but we’re making progress. At the time of writing (2017-01-31) 56.29% (1248) of all HPR shows (2217) have tags.
+
In recent times the way in which we should use these tags has been discussed. In show 2035 on 2016-05-20 droops suggested:
+
+
The website, which is a lot of work, needs to have related shows listed on each individual show’s page. This will take a tag system and someone to tag all of the almost uncountable previous episodes.
+
+
This episode begins a discussion about some of the ways that tags can be stored, managed and accessed efficiently in the HPR database.
+
I started planning a show about this subject in the summer of 2016, and the amount of information I have accumulated has grown since then. There is now quite a lot, so I am going to split what was originally going to be one show into three.
+
The subject becomes quite technical in the later shows, discussing database design techniques, and all three of the shows contain examples of database queries and scripts. If you are not interested in this subject then feel free to skip past. However, you might find this first episode more palatable, and any thoughts you might have on the subject would be appreciated.
+
Previous discussions
+
There have been discussions in the past about whether we should use a database at all to hold HPR shows, and whether a static site might be better for our needs. This has been motivated by security considerations amongst other things. Such a static site would probably be generated from a database, since there are a number of instances where the site needs to contain computed values such as the number of shows in a series, or the number recorded by a given host, and so forth, and databases are good at this.
+
If, as now, a database is being used then there are differing opinions on how it should be put together. Database administration skills are quite specialised and the concepts behind databases can be a little difficult to comprehend when first encountered.
+
One view has been that using any of the advanced features of the chosen database system should be avoided because doing so will be too complicated for future volunteer HPR administrators to maintain. The types of capabilities that would be desirable include:
+
+
Allowing there to be more than one host associated with a show.
+
Allowing a show to be associated with more than one series.
+
Implementing a tag system.
+
+
On the other hand, the view has certainly been expressed that using a relational database like a collection of spreadsheets (as now) is a woeful misuse of a powerful tool.
+
I will give examples of some of these issues in these episodes.
+
Tags and their potential uses
+
The question needs to be asked: “Why have tags at all?”
+
1. Relationships between shows
+
As suggested by droops, if a particular show has tags associated with it, then all of the shows which share any of the same tags should be related and should be linked (or linkable) from that show somehow. Displaying such relationships would be helpful in finding other shows worth listening to.
+
2. Listing shows by tags
+
It might also be useful to list all the shows that have a given tag, perhaps as a sort of master show index. I am envisaging something like the index of a book with a tag on the left and a show number link on the right of the page. Alternatively a table listing tags with show number and title to the right. Of course, with a well-populated tag system, there will often be several shows which are tagged with a given tag.
+
3. Tag queries
+
Many tagging systems allow complex queries such as:
+
interview and ( oggcamp or fscons )
+
This means all shows tagged ‘interview’ and either ‘oggcamp’ or ‘fscons’. Although implementing this at the database level is possible, I suspect that such a feature would require Javascript on the front end to make a useable interface on the website, so it may not be implemented for some time (if at all).
+
On the other hand, doing this through an URL-based interface may be simpler.
+
Database design
+
Now let us look at how a tag system could be implemented in the database.
+
The HPR database currently holds tags as a comma-separated list - the exact same way we request them to be entered into the form when an episode is being uploaded. We do not currently reformat these in any standard way, so although they are comma-separated entities they do not conform to the Comma-Separated Values (CSV) standard (see RFC 4180).
+
One reason why such a standard would be desirable is that there may be a need to add a tag which contains a comma. For instance, the tags for a show might be:
+
TED
+Technology, Education and Design
+
Here the expectation was that “Technology, Education and Design” was a single tag, not the two tags it might be interpreted as.
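+
Stored according to the CSV standard, the comma-containing tag would be quoted, so the field would read:
+
TED,"Technology, Education and Design"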
+
The storage currently allocated for the tag list is 200 characters. This is potentially quite restrictive since some very long shows would benefit from more tags than might fit this field. The longest tag list currently in the database is 196 characters.
+
So, the question is what would be the best way to store tags? I will look at the current solution and two other different solutions and comment about them. This episode will concentrate on the current way of doing things and the next two will look at the other suggested alternatives.
+
As a disclaimer, I am not a trained database designer. I have worked with databases a fair bit over the past 15 years or so but am largely self-taught. Listeners to this episode may well have better ideas for doing what is needed, and will probably have had proper training and much more experience. If so, please add ideas and suggestions in the comments, send email to the HPR list, or email me directly.
+
1. Tags as a comma-separated list
+
This is the current solution. The ‘tags’ field is a column of the ‘eps’ table which holds episode details in the HPR database.
+
Advantages
+
+
This is a very simple way to store such tags. We just place words or phrases in a comma-separated list and add them to the tags field for an episode.
+
It is very simple to maintain. We just add tags to the field or remove them – with relative ease.
+
+
Disadvantages
+
+
The current field is small and can easily be exceeded when dealing with long and complex shows that would benefit from a lot of tags (such as the New Year shows where people tend to list things such as favourite software or hardware which could be tagged).
+
The tags should be pre-processed to ensure they conform to CSV format before storage. For instance, we would want tags with commas in them like the earlier example “Technology, Education and Design” to be enclosed in quotes.
+
This format is wasteful since the same tags are duplicated throughout episodes (there are currently 84 instances of “Linux” in the tags for example).
+
This format is very difficult to search efficiently (see below). In database terms searching requires each ‘tags’ field to be examined with a string search which is expensive and scales badly.
+
Databases are usually designed to use indexes to optimise searches, and this cannot be done here (or is beyond my skill level).
+
+
Searching
+
As mentioned, there is a database table called ‘eps’ which holds episode details such as the show number, release date and episode title, and the tags are held in a column of this table.
+
Finding shows with a given tag
+
If we look at the show droops recorded called “Building Community”, episode 2035, this has a single tag, the word ‘community’. Say we want to find other shows with this tag – we might use the following query:
+
SELECT id,date,title FROM eps WHERE tags LIKE '%community%';
+
+
+
Note: This is SQL, Structured Query Language. ‘SELECT’ initiates a query, where information is retrieved from the database. The name of the table being queried follows ‘FROM’, and the fields to display follow the ‘SELECT’. The ‘WHERE’ part introduces the segment of the query which filters out a subset of the table. Here we are checking the ‘tags’ field, looking for “community”.
+In short, the query finds all rows in the ‘eps’ table with “community” in the ‘tags’ field.
+
+
+
We cannot use the equality operator in the ‘WHERE’ clause, since we’re looking for the word ‘community’ in a list of other words, so we use the wildcard capabilities of SQL where ‘%’ is the wildcard used in conjunction with the ‘LIKE’ operator. We are looking for any tag field containing ‘community’. The comparisons are not case-sensitive.
+
Running this query currently returns 95 rows, since the Community News shows are all tagged ‘community news’ and this approach does not differentiate between ‘community’ as a tag or as part of a tag.
+
We could try this:
+
SELECT id,date,title FROM eps WHERE tags LIKE '%,community,%';
+
This time we get three hits but it does not include show 2035 because there are no commas in the tags field in this case, since there is just one tag.
+
We could try covering all possibilities:
+
SELECT id,date,title FROM eps WHERE tags LIKE '%community%'
+ OR tags LIKE '%,community%' OR tags LIKE '%community,%'
+ OR tags LIKE '%,community,%';
+
However, this produces 95 results again since it doesn’t exclude ‘community news’ and is getting quite ugly.
+
The database server, MariaDB, can handle (limited) regular expressions so we might try that as a solution:
+
SELECT id,date,title FROM eps WHERE tags REGEXP '(^|,)community(,|$)';
+
Here the regular expression matches ‘community’ at the start of the field, or after a comma, as well as at the end of the field or before a comma.
+
This now returns five matches. However, the tag strings often contain spaces since they have not yet been cleaned up, so it is necessary to enhance the expression to avoid such problems:
+
+
MariaDB [hpr_hpr]> SELECT id,date,title,tags FROM eps WHERE tags REGEXP '(^|,) *community *(,|$)';
++------+------------+--------------------------------------------------------------------+---------------------------------------------------------------+
+| id | date | title | tags |
++------+------------+--------------------------------------------------------------------+---------------------------------------------------------------+
+| 1 | 2007-12-31 | Introduction to HPR | hpr, twat, community |
+| 947 | 2012-03-20 | Presentation by Jared Smith at the Columbia Area Linux Users Group | Fedora,community |
+| 1000 | 2012-05-31 | Episode 1000 | HPR,community,congratulations |
+| 1024 | 2012-07-05 | Episode 1024 | HPR,community,anniversary |
+| 1509 | 2014-05-15 | HPR Needs Shows | HPR, shows, request, call to action, community, contribute |
+| 1913 | 2015-12-02 | The Linux Experiment | linux, the linux experiment, community |
+| 2008 | 2016-04-13 | HPR needs shows to survive. | HPR,community,shows,call to action,contribute |
+| 2035 | 2016-05-20 | Building Community | community |
+| 2077 | 2016-07-19 | libernil.net and self hosting for friends and family | gnu, linux, networking, community, servers, services, commons |
++------+------------+--------------------------------------------------------------------+---------------------------------------------------------------+
+9 rows in set (0.01 sec)
+
+
Here we get nine matches, which is correct.
+
It is also possible to use the regular expression to find multiple tags. In this example we look for the string ‘hpr’ or ‘community’ and get 14 matches:
+
MariaDB [hpr_hpr]> SELECT id,date,title,tags FROM eps WHERE tags REGEXP '(^|,) *(hpr|community) *(,|$)';
++------+------------+--------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+
+| id | date | title | tags |
++------+------------+--------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+
+| 1 | 2007-12-31 | Introduction to HPR | hpr, twat, community |
+| 947 | 2012-03-20 | Presentation by Jared Smith at the Columbia Area Linux Users Group | Fedora,community |
+| 2195 | 2016-12-30 | All you need to know when uploading a show | HPR |
+| 1000 | 2012-05-31 | Episode 1000 | HPR,community,congratulations |
+| 1024 | 2012-07-05 | Episode 1024 | HPR,community,anniversary |
+| 1371 | 2013-11-04 | The Lost Banner of HPR | hpr,banner |
+| 1509 | 2014-05-15 | HPR Needs Shows | HPR, shows, request, call to action, community, contribute |
+| 1726 | 2015-03-16 | 15 Excuses not to Record a show for HPR | hpr,podcasting,tips,techniques,kw,knightwise,excuses |
+| 1818 | 2015-07-22 | Review of HPR's Interview Recorder: Zoom H1 | Zoom H1, microphone, recording, review, DVR, digital voice recorder, tutorial, getting started, guide, howto, HPR |
+| 1877 | 2015-10-13 | Recording HPR on the fly on your Android phone | android, hpr, audio, recording |
+| 1913 | 2015-12-02 | The Linux Experiment | linux, the linux experiment, community |
+| 2008 | 2016-04-13 | HPR needs shows to survive. | HPR,community,shows,call to action,contribute |
+| 2035 | 2016-05-20 | Building Community | community |
+| 2077 | 2016-07-19 | libernil.net and self hosting for friends and family | gnu, linux, networking, community, servers, services, commons |
++------+------------+--------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+
+14 rows in set (0.00 sec)
+
+
If however we want to find ‘hpr’ and ‘community’ then we have to use two regular expressions. This time there are only 5 matches, as might be expected:
+
MariaDB [hpr_hpr]> SELECT id,date,title,tags FROM eps WHERE tags REGEXP '(^|,) *(community) *(,|$)' AND tags REGEXP '(^|,) *(hpr) *(,|$)';
++------+------------+-----------------------------+------------------------------------------------------------+
+| id | date | title | tags |
++------+------------+-----------------------------+------------------------------------------------------------+
+| 1 | 2007-12-31 | Introduction to HPR | hpr, twat, community |
+| 1000 | 2012-05-31 | Episode 1000 | HPR,community,congratulations |
+| 1024 | 2012-07-05 | Episode 1024 | HPR,community,anniversary |
+| 1509 | 2014-05-15 | HPR Needs Shows | HPR, shows, request, call to action, community, contribute |
+| 2008 | 2016-04-13 | HPR needs shows to survive. | HPR,community,shows,call to action,contribute |
++------+------------+-----------------------------+------------------------------------------------------------+
+5 rows in set (0.00 sec)
+
+
MariaDB offers a function ‘find_in_set’ which parses comma-separated lists, so it is possible to do this:
+
SELECT id,date,title FROM eps WHERE find_in_set('community',tags) > 0;
+
However, this only returns five matches, because of the extra spaces before the tags, so the regular expression method seems best unless the tags have been significantly cleaned up.
+
Finding shows related to a given show using tags
+
How then could a query be constructed to do the thing that droops suggested: given a show number, extract its tags and use them to perform a search for other shows matching each tag?
+
I can think of ways this could be scripted, though it would take more than one query:
+
+
one query to get the target show
+
the script would need to parse the CSV data in the tags field for the show
+
then queries would need to be issued for each of the tags so discovered
+
+
I cannot think of any way in which this could be performed as a single SQL query.
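+
A rough sketch of such a script (assuming the ‘hpr_hpr’ database used in the examples above; the naive comma-splitting here would need to be replaced by a proper CSV parser in practice):
+
#!/bin/bash
# Hypothetical sketch: list shows sharing any tag with the given show.
show=$1

# Query 1: fetch the target show's comma-separated tag list
tags=$(mysql -N hpr_hpr -e "SELECT tags FROM eps WHERE id = $show")

# Split the list on commas (naive; quoted tags containing commas will break)
IFS=',' read -ra taglist <<< "$tags"

for tag in "${taglist[@]}"; do
    # Trim leading and trailing spaces from the tag
    tag=$(echo "$tag" | sed 's/^ *//; s/ *$//')

    # One query per tag, excluding the target show itself
    mysql hpr_hpr -e "SELECT id,date,title FROM eps
        WHERE id <> $show AND tags REGEXP '(^|,) *${tag} *(,|\$)'"
done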
+
Conclusion
+
I think that using the comma separated tag field for anything but the very simplest types of queries is undesirable. Even if we continued to use this method, we would need to:
+
+
Increase the size of the field (e.g. ‘alter table eps modify tags varchar(400) null;’)
+
Ensure the contents conform to CSV standards (I have written a fairly simple Perl script called ‘clean_csv_tags’ to do this, which is available with this show)
+
Ideally, perform checks on the tags for spelling, unnecessary plurals, spurious punctuation and so on (my script could probably be modified to do this)
+
+
Notwithstanding these points, my feeling is that we should look for a better solution.
+
Epilogue
+
A couple of requests regarding tags:
+
+
Please include tags when uploading your shows. Just add a few keywords to the tags field reflecting what your show was about or topics you spoke about.
+
Note: more contributions to the project to add missing tags will always be welcome! Visit the page on the HPR website listing missing summaries and tags to find out how you could help.
Perl script to clean the tags field in the database: clean_csv_tags
+
diff --git a/eps/hpr2255/hpr2255_full_shownotes.epub b/eps/hpr2255/hpr2255_full_shownotes.epub
new file mode 100755
index 0000000..17f84f4
Binary files /dev/null and b/eps/hpr2255/hpr2255_full_shownotes.epub differ
diff --git a/eps/hpr2255/hpr2255_full_shownotes.html b/eps/hpr2255/hpr2255_full_shownotes.html
new file mode 100755
index 0000000..7849a99
--- /dev/null
+++ b/eps/hpr2255/hpr2255_full_shownotes.html
@@ -0,0 +1,363 @@
+
The Good Ship HPR (HPR Show 2255)
+
Dave Morriss
+
Table of Contents
+
Hacker Public Radio
+
What is it?
+
The podcast called Hacker Public Radio (HPR) is an amazing phenomenon. It has been providing an episode a day every weekday for years, and these episodes originate from the community.
+
I heard someone refer to HPR as “Crowd Sourced” which seemed like a good way of describing things. It is an open access resource which is managed under various Creative Commons licences, usually CC-BY-SA.
+
The content is very broad in scope. Anything “of interest to Hackers” is acceptable, which is interpreted in a wide variety of ways.
+
Access to shows is open to all through the HPR site, where shows back to episode 1 can be browsed, notes read, etc. There are feeds which propagate various updates: to shows, series, comments and email. Current shows are archived to the Internet Archive (archive.org) within a few days of appearing in the main feed, and older shows are gradually being archived this way with the intention of eventually storing everything there.
As you can see if you examine the details on the website statistics page, the predecessor of HPR started more than 11 years ago as “Today With A Techie”, transforming into “Hacker Public Radio” over 9 years ago.
+
Started: 11 years, 4 months, 12 days ago (2005-10-10)
+Renamed HPR: 9 years, 1 months, 20 days ago (2007-12-31)
+
In the earlier days the frequency of show release was not the predictable 5 per week, every weekday, that it is now. There were gaps, sometimes of several days, and occasionally shows came out on the weekend. Stability was achieved in October 2012 [1] and there have been no gaps since then!
+
There are currently 280 hosts who have contributed shows at some point in the history of HPR, and at the time of writing in February 2017 show number 2230 has been released. The number of episodes and hosts will be greater when the episodes from “Today With A Techie” are incorporated into the archive.
+
The Hacker Public Radio experiment has been very successful over the years, but there is a certain fragility in the way it works, and that is the reason for doing this episode.
+
The Problem
+
The big gotcha is that HPR needs a steady supply of contributions to keep releasing one show every weekday, and sometimes the supply dries up.
+
HPR needs approximately 52*5=260 shows per year (which coincidentally means that one show from each registered host per year would easily clinch the deal!). However, the rate of supply is not reliable. Sometimes there are plenty of shows in the queue, but at other times the supply dwindles and the future of HPR looks very much in doubt.
+
There is a small buffer of emergency shows held for use if there is a gap in the schedule 24 hours prior to release. This emergency queue currently contains 8 shows, but there has been debate over whether this is a good idea.
+
It was April 2016 when I started planning this episode, and HPR had just been through a bit of a crisis in the supply of shows. The queue was down to the last few shows and was almost about to run out. An appeal for shows went out and it was great to see how people stepped up and provided them.
+
Then things went quiet and the state of shortage started to loom again, though it did not become as severe. It took another appeal on a Community News show to restart the flow. Then another shortage happened towards the middle of August, and so it went on – and tends to go on. You only need to look at the sawtooth shape of the graph on the calendar page to see what I mean.
+
+
This is a feature of what has been described as the “Community Internet Radio” model that defines HPR:
+
+
“feast” followed by “famine” followed by “feast” and so on.
+
+
In a recent conversation about this problem I likened the situation to a leaky ship - “The Good Ship HPR”. The ship is always in danger of sinking unless we keep emptying the water out by bailing. There are at least 280 crew members who have buckets and can help with emptying the water, but there are also many more passengers who could grab a bucket and join in! Everyone needs to take a turn: it’s not reasonable to expect just a few crew members to keep the ship from sinking beneath the waves.
+
+
As mentioned earlier, in the past 12 months (12th April 2016 to be precise) HPR reached the point where the queue was almost completely out of shows. After calls for help from Ken on the mailing list and in the form of a show, many people stepped up and made a contribution bringing things back to a much healthier state.
+
This is wonderful, but it’s not the way to run “The Good Ship HPR”. We can’t have water washing over the decks before we start bailing! It needs to be a constant process.
+
Statistics
+
Let’s look at some recent statistics (collected at the time of writing: 2017-02-19):
+
+
In the past 12 months 67 hosts (some new) have contributed 260 shows.
+
In this period 22 new hosts have done their first show and have gone on to contribute 56 shows in total between them.
+
+
The following table breaks down the contributions:
+
               >10 shows   between 5 and 10   <5 shows
Hosts                  5                 10         52
Shows                 87                 73        100
Percentage        33.46%             28.08%     38.46%
+
The table shows the number of episodes contributed by the 5 people who each did over 10 episodes in the last year: 87. This was 33.46% of the episodes needed. The 10 people contributing between 5 and 10 episodes recorded 73 episodes (28.08%) and the remaining 100 (38.46%) came from 52 people contributing fewer than 5 shows.
+
+
+
The table below shows the number of new hosts joining over the lifetime of HPR. Remember that HPR was created on the 31st December 2007, so there was not much time for new hosts to join that year!
+
Calendar year   Number of new hosts
2007                              1
2008                             47
2009                             19
2010                             21
2011                             49
2012                             32
2013                             39
2014                             21
2015                             27
2016                             22
2017                              2
Total                           280
+
Solutions?
+
There are no simple solutions to this problem. People are contributing episodes for HPR, and the project is still alive, but there are factors which make a steady rate of contributions unreliable.
+
This problem of unreliability has been occurring for as long as the HPR project has existed, and nobody has found a solution yet.
+
In this section I am proffering some thoughts, ideas and comments to try and raise awareness and request suggestions from the community.
+
Some assumptions and caveats
+
I have been guilty of various incorrect assumptions about where contributions come from. I thought it might be useful to start by clarifying a few points.
+
+
There is a process of what might be called attrition amongst the contributors to HPR. A proportion of the host population have done just one show and then vanished. Others have done more than one show but have then stopped producing shows.
+
Although the process of recruiting new hosts is important, especially considering the attrition just mentioned, HPR continues to function because of repeated contributions from hosts.
+
Encouraging existing hosts to continue contributing shows would help considerably to solve the shortages.
+
HPR needs listeners, and its visibility needs to be reasonably high. In terms of the supply of episodes contributors are more important than listeners though. Ideally, all listeners would be contributors!
+
Not all contributors necessarily listen to HPR. Some might use it as a springboard to doing a podcast of their own. Others might just like the HPR concept and want to make their mark.
+
+
Raising the profile of HPR
+
+
The more HPR is known about, the more listeners there will be. The more listeners there are the more hosts there are likely to be. The more new hosts we get the more shows there will be (though see the caveats above)!
+
Suggestions were made by droops in show 2035. Here are the points he made:
+
+
Transcribe shows. This is a large task.
+
Get well-known podcasters to guest host or advertise HPR.
+
Interview more people who will mention the interview on their blog or social media
+
Offer a phone app to simplify the recording and submission of shows
+
Collect more topics through a survey or a submission form
+
Generate host photos with show titles for social media
+
Make a video explaining what HPR is
+
Each show on the website should link to related shows (using tags). The tag system exists and is being populated.
+
More shows about cool software, books or documentaries
+
An HPR shop with stickers, t-shirts and tote bags
+
+
+
+
+
Need for new hosts
+
+
Sampling the HPR database revealed that quite a number of registered hosts (87) in the period up to the end of 2015 contributed one show and were never seen again. Looking at the statistics, around 31.07% of the host count contributed their one show before the start of 2016. If this is the norm it means that HPR needs a constant stream of new hosts.
+
There needs to be some means of attracting potential new hosts to make a contribution to HPR. In general many people are impressed by the HPR model, in my experience. The thought of doing a show is often not considered though.
+
Perhaps there is mileage in someone doing a show (or video) about breaking down the barriers for new hosts. I have encountered people who show great trepidation when thinking about recording something themselves. The question “what would I talk about?” is often the first, perhaps followed by “what would I need to do it?”. There is often the comment that “I’d feel stupid recording myself” or “I hate the sound of my own voice”.
+
+
Simplifying the submission process
+
+
It’s already quite streamlined compared to what it was. The idea droops suggested of making a phone app is really good I think. Is there anyone in the community who could do such a thing?
+
Perhaps a show describing the current process would help to explain it better? Maybe even a YouTube video?
+
+
The incentive to do show number 2 and beyond
+
+
Since I joined HPR I have felt that it’s important to give feedback on shows. I wonder how valued contributors feel, and to what extent the lack of feedback has dissuaded people from doing more shows.
+
+
+
“I did a show for HPR which I thought might be interesting but nobody said anything about it. I don’t know if I’ll bother to do another.”
+
+
+
I enjoyed the Community News shows when I first heard them and was keen that there should be a regular review of each month’s episodes. I still feel that this is an important part of HPR, along with the comments which are made on individual shows.
+
I have wondered if a “Like” button would improve the process of giving feedback on episodes, and could help with encouragement. This would probably not be trivial to implement.
+
+
What is an HPR show?
+
+
The range of show topics submitted to HPR is very broad. The impression new listeners get could be that HPR is all about highly technical topics, scripting and programming. While this is true, we have also had subjects ranging from swimming in a river in France, through cooking of various sorts, making coffee, mental health and building a bicycle. The term “of interest to Hackers” also means of interest to hobbyists or makers. The description is so wide that there is hardly anything that is not acceptable. Perhaps this fact should be made more visible to potential hosts?
+
Should we be “advertising” HPR to potential listeners, and more to the point, potential hosts as a vehicle for them to tell others about their particular interests?
+
+
More statistics
+
+
I have been delving into the HPR show database while writing this episode, and it occurs to me that maintaining a page of statistics about hosts and shows might be interesting and might help to remind contributors and potential contributors of the constant need for shows.
+
The statistics section above might serve as an example of what could be displayed on the HPR site.
+
On the more wacky side, perhaps some competitive statistics could be displayed:
+
+
Longest show in the last year or month?
+
Shortest show in the last year or month?
+
The show with the most comments?
+
Host with the most shows in the last month or year?
+
+
Personally, I’m not particularly keen on any of these ideas!
+
+
Really absurd ideas
+
+
Send out “begging letters”.
+
“Dear X, HPR is constantly in need of shows. You have been a contributor in the past but we have not heard from you for a while. If you could record another show for us it would be very helpful to the survival of the project.”
+
As the recipient of various letters and emails of this sort in other contexts, I think such a scheme would dissuade more people than it persuaded. So my vote would be against.
+
+
Conclusion
+
We have seen what “The Good Ship HPR” is, and have considered the leaky boat problem. I have offered some collected thoughts and opinions. I have no absolute answers. If you can think of other ways of ensuring there is a steadier flow of shows then please let us know - through the mailing list would be the best route.
+
Links
+
+
HPR calendar page - shows the queue and a graph of the levels over past months
+
HPR statistics page - a collection of current and historical statistics about HPR
[1] How do I know? I wrote a Perl script to look for gaps and the last one found was between shows 1092 and 1093 on Monday October 08 2012 and Wednesday October 10 2012 respectively.↩
+
+
+
+
+
+
+
diff --git a/eps/hpr2260/hpr2260_SQLeo_1.jpg b/eps/hpr2260/hpr2260_SQLeo_1.jpg
new file mode 100755
index 0000000..d05782a
Binary files /dev/null and b/eps/hpr2260/hpr2260_SQLeo_1.jpg differ
diff --git a/eps/hpr2260/hpr2260_find_shows_sharing_tags.sql b/eps/hpr2260/hpr2260_find_shows_sharing_tags.sql
new file mode 100755
index 0000000..7faaf03
--- /dev/null
+++ b/eps/hpr2260/hpr2260_find_shows_sharing_tags.sql
@@ -0,0 +1,35 @@
+/*
+ * Given a show number find all other shows that share each of its tags.
+ * Report the tags first followed by some of the episode details
+*/
+
+-- SET @show = 383;
+
+-- Report the tags on the show just for information
+SELECT
+ tag
+FROM
+ tags
+WHERE
+ id = @show;
+
+-- Find all shows sharing the tags, omitting the target one
+SELECT
+ t.lctag, e.id, e.date, h.host, e.title
+FROM
+ tags t,
+ eps e,
+ hosts h
+WHERE
+ t.id = e.id
+ AND e.hostid = h.hostid
+ AND t.tag IN (
+ SELECT
+ tag
+ FROM
+ tags
+ WHERE
+ id = @show)
+GROUP BY e.id
+HAVING e.id <> @show
+ORDER BY t.lctag, e.id;
diff --git a/eps/hpr2260/hpr2260_full_shownotes.html b/eps/hpr2260/hpr2260_full_shownotes.html
new file mode 100755
index 0000000..70fa78a
--- /dev/null
+++ b/eps/hpr2260/hpr2260_full_shownotes.html
@@ -0,0 +1,418 @@
+
Managing tags on HPR episodes - 2 (HPR Show 2260)
+
Dave Morriss
+
Table of Contents
+
+
+
Introduction
+
This is the second show looking at the subject of Managing Tags.
+
In the first show we looked at why we need tags, examined the present system and considered the advantages and disadvantages of doing things the current way.
+
To reiterate my disclaimer from the last episode: I am not a trained database designer. I have worked with databases a fair bit over the past 15 years or so but am largely self-taught. If I am talking nonsense, or if there are far better ways of doing what I’m suggesting please let me know!
+
Database Design
+
In the last episode we looked at method 1 (the current method) – using a comma-separated list of tag strings. Now we’ll look at an alternative method.
+
2. Using a tags table
+
Tagging has been in use on the Web for a long time, and looking around for suggestions on how best to set it up in a database I came across one solution which uses a single tags table (see the “Scuttle solution”).
+
The table, named ‘tags’ in my test database, holds three columns that I have called:
+
id - a reference to a show number in the 'eps' table
+tag - a single tag in mixed case
+lctag - a single tag in lower case form
+
The use of a mixed case and a lower case form is probably not necessary since MariaDB performs case-insensitive matches by default. I was not aware of this when I put this test table together.
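+
For example, assuming the default case-insensitive collation (such as utf8_general_ci), a string comparison matches regardless of the stored case:
+
SELECT id, tag FROM tags WHERE tag = 'GREP';
-- matches rows stored as 'grep', 'Grep' or 'GREP'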
+
Setting up and managing the table
+
For the purposes of testing I started by taking the tags from the comma separated list in the ‘eps’ table.
+
Method 1: using deep dark Database magic to manage the tags
+
My first approach to populating this table was to use database tools to do the work. I wrote a stored database function to parse the current ‘eps.tags’ field and extract the individual tags, and another to manage the process of populating the new ‘tags’ table. This is controlled by a piece of SQL in a file which first empties the ‘tags’ table before rebuilding it from the ‘eps’ table.
+
+
+Note: This is a fairly advanced Database Administration topic. The use of stored procedures in a database is something I have done before, but I am by no means an expert on it. I have included this mainly for interest and it can be skipped with little detriment if you are not interested.
+
+
+
The SQL file defining the table, indexes (see below) and the code to process tags is called normalise_tags_1.sql and is available on the HPR website as part of this episode. It is based on an article found on Stack Overflow.
+
The SQL I run to refresh the ‘tags’ table is called refresh_tags.sql and is also available with this show. This method empties and rebuilds the ‘tags’ table.
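+
The effect is equivalent to the pair of statements suggested in the comments of normalise_tags_1.sql:
+
DELETE FROM tags;
CALL NormaliseEpisodeTags();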
+
Method 2: using a Perl script to manage the tags
+
An alternative method of managing this table is included. It consists of a Perl script (refresh_tags) which adds and removes individual entries rather than clearing the table and rebuilding it.
+
The script is a work in progress. It works by scanning the entire database and collecting all of the tags stored in the ‘eps.tags’ field in CSV format (discussed at length in the last episode). It uses a Perl module ‘Text::CSV_XS’ to parse the CSV data. The script ensures the parsed CSV tags are unique per episode and sorts them internally.
+
The script then collects all of the tags already in the ‘tags’ table and organises them so that they can be compared with the first collection. Differences between the two tag sources are then computed and, unless the script is in ‘dry run’ mode, applied to the ‘tags’ table. There may be deletions if for some reason a tag has been removed from the CSV list, and additions if new tags have been added.
+
The importance of indexes
+
One of the important parts of this tag solution is the use of indexes associated with the ‘tags’ table. There is an index called ‘tags_all’ that ensures every row of the table is unique – it makes no sense to have the same tag repeated for an episode, for example. There are indexes on the other fields as well which are intended to speed up access.
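+
For example, an attempt to insert an exact duplicate row is rejected by the ‘tags_all’ index (the error text here is illustrative):
+
INSERT INTO tags (id, tag, lctag) VALUES (2040, 'grep', 'grep');
-- ERROR 1062 (23000): Duplicate entry '2040-grep-grep' for key 'tags_all'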
+
I am running this locally on a copy of the HPR database, and in my experiments it takes less than two seconds to rebuild the table starting from scratch and considerably less to update using the Perl script.
+
The table contains repeated instances of a tag, one for each matching episode number:
+
MariaDB [hpr_hpr]> SELECT id,tag FROM tags WHERE tag = 'grep' ORDER BY id;
++------+------+
+| id | tag |
++------+------+
+| 2040 | grep |
+| 2072 | grep |
++------+------+
+2 rows in set (0.00 sec)
+
(You would expect ‘grep’ to appear more often than this, but it doesn’t at the moment due to the relative scarcity of tags)
+
Advantages
+
+
This approach gives a much more reliable and efficient solution to the problem of storing and finding tags.
+
The separate table allows for indexes to be built to optimise access speeds, which is something that cannot be done for the comma-separated string approach.
+
+
Disadvantages
+
+
Using method 1 (stored procedures) the solution requires the clearing out and repopulating of the tags table. This means that during the time when the table is empty and being rebuilt the tag functionality is missing or reduced. When using method 2 and the Perl script mentioned above this disadvantage vanishes.
+
The approach of doing the parsing of the comma-separated list with SQL (method 1) is not ideal since it does not allow us to use properly formatted CSV data. For example, if a tag with a comma is enclosed in quotes to “protect” the comma this solution doesn’t recognise it. As before, when using the Perl script described above for method 2 this problem vanishes.
+
In database terms it is desirable to “normalise” the structure as much as possible. That means that storing duplicate values like the same tag over and over again is frowned upon. This solution is not “normalised”.
+
+
Searching
+
We can now perform much more sophisticated searches since the work of parsing and extracting tags has already been done – when the table was built. Unlike the solution discussed in the first show where each search requires the tag list per episode to be examined and parsed, this solution is much more efficient. The parsing is performed once and the results stored.
+
So if we want to be able to follow droops’ suggestion to examine the tags on a given show and find all the other shows that share the same tags we can, as demonstrated below.
+
Finding shows with a given tag
+
In the last episode we searched using the ‘tags’ field in the ‘eps’ table, but now we can use the ‘tags’ table to find all the shows associated with the tag ‘community’, and we then report those shows. It’s necessary to order the result since the rows will not be sorted:
+
+
MariaDB [hpr_hpr]> SELECT e.id,date,substr(title,1,30) AS title,tags FROM eps e, tags t WHERE e.id = t.id AND tag = 'community' ORDER BY id;
++------+------------+--------------------------------+---------------------------------------------------------------+
+| id | date | title | tags |
++------+------------+--------------------------------+---------------------------------------------------------------+
+| 1 | 2007-12-31 | Introduction to HPR | hpr, twat, community |
+| 947 | 2012-03-20 | Presentation by Jared Smith at | Fedora,community |
+| 1000 | 2012-05-31 | Episode 1000 | HPR,community,congratulations |
+| 1024 | 2012-07-05 | Episode 1024 | HPR,community,anniversary |
+| 1509 | 2014-05-15 | HPR Needs Shows | HPR, shows, request, call to action, community, contribute |
+| 1913 | 2015-12-02 | The Linux Experiment | linux, the linux experiment, community |
+| 2008 | 2016-04-13 | HPR needs shows to survive. | HPR,community,shows,call to action,contribute |
+| 2035 | 2016-05-20 | Building Community | community |
+| 2077 | 2016-07-19 | libernil.net and self hosting | gnu, linux, networking, community, servers, services, commons |
++------+------------+--------------------------------+---------------------------------------------------------------+
+9 rows in set (0.00 sec)
+
+
+
+
+
Skip if not interested
+
This SQL query is more complex than those seen before. It is selecting from the ‘eps’ and ‘tags’ tables at once. This is called a ‘JOIN’ and the two tables are given aliases (‘e’ and ‘t’) to save typing.
+
The first clause after the ‘WHERE’ is matching the ‘id’ fields in the two tables so we only get episodes related to tags (and vice versa). The clause ‘tag = 'community'’ selects just those matching tags and therefore the related episodes.
In case you were wondering, the part ‘substr(title,1,30) AS title’ trims the title to 30 characters to fit these notes, and the ‘AS title’ part just ensures there’s a sensible name over the output column.
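+
The same query can also be written with explicit ‘JOIN’ syntax; this sketch returns the same rows and some people find it easier to read:
+
SELECT e.id, e.date, substr(e.title,1,30) AS title, e.tags
FROM eps e
JOIN tags t ON e.id = t.id
WHERE t.tag = 'community'
ORDER BY e.id;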
+
+
+
+
Note that the database used has the raw, unprocessed ‘eps.tags’ field. The cleaned form is not needed when using the ‘tags’ table managed by the Perl script because the script processes the tags internally.
+
I used the SQLeo tool mentioned by Ken Fallon in HPR episode 1965 and have included an image of the query we have just examined:
+
+Screenshot from SQLeo
+
Note that there are no lines connecting the tables, and that is because we don’t have foreign keys in the HPR database. I think this is because of its origins, perhaps dating from before MySQL had reliably implemented this capability. Also, the types of tables used do not support foreign keys. It is my view that this is another area that needs attention in the HPR database.
+
Finding shows with combinations of tags
+
In the last show we looked at ways in which the combination of tags ‘community’ and/or ‘HPR’ could be searched for in the CSV tags. Using the ‘tags’ table this is simpler (in database terms).
+
+
MariaDB [hpr_hpr]> SELECT e.id,e.date,substr(e.title,1,30) AS title,tags FROM eps e, tags t WHERE e.id = t.id AND t.tag IN ('community','hpr') GROUP BY e.id;
++------+------------+--------------------------------+-------------------------------------------------------------------------------------------------------------------+
+| id | date | title | tags |
++------+------------+--------------------------------+-------------------------------------------------------------------------------------------------------------------+
+| 1 | 2007-12-31 | Introduction to HPR | hpr, twat, community |
+| 947 | 2012-03-20 | Presentation by Jared Smith at | Fedora,community |
+| 1000 | 2012-05-31 | Episode 1000 | HPR,community,congratulations |
+| 1024 | 2012-07-05 | Episode 1024 | HPR,community,anniversary |
+| 1371 | 2013-11-04 | The Lost Banner of HPR | hpr,banner |
+| 1509 | 2014-05-15 | HPR Needs Shows | HPR, shows, request, call to action, community, contribute |
+| 1726 | 2015-03-16 | 15 Excuses not to Record a sho | hpr,podcasting,tips,techniques,kw,knightwise,excuses |
+| 1818 | 2015-07-22 | Review of HPR's Interview Reco | Zoom H1, microphone, recording, review, DVR, digital voice recorder, tutorial, getting started, guide, howto, HPR |
+| 1877 | 2015-10-13 | Recording HPR on the fly on yo | android, hpr, audio, recording |
+| 1913 | 2015-12-02 | The Linux Experiment | linux, the linux experiment, community |
+| 2008 | 2016-04-13 | HPR needs shows to survive. | HPR,community,shows,call to action,contribute |
+| 2035 | 2016-05-20 | Building Community | community |
+| 2077 | 2016-07-19 | libernil.net and self hosting | gnu, linux, networking, community, servers, services, commons |
+| 2195 | 2016-12-30 | All you need to know when uplo | HPR |
++------+------------+--------------------------------+-------------------------------------------------------------------------------------------------------------------+
+14 rows in set (0.00 sec)
+
+
This is like the ‘OR’ example in part 1.
+
The ‘GROUP BY’ clause is a way of de-duplicating the result. Without it 19 rows are returned but 5 are duplicates.
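+
This can be confirmed by counting the raw matches without the de-duplication:
+
SELECT count(*) FROM eps e, tags t
WHERE e.id = t.id AND t.tag IN ('community','hpr');
-- returns 19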
+
+
MariaDB [hpr_hpr]> SELECT e.id,e.date,substr(e.title,1,30) AS title, tags FROM eps e, tags t WHERE e.id = t.id AND t.tag in ('community','hpr') GROUP BY e.id HAVING count(e.id) = 2;
++------+------------+-----------------------------+------------------------------------------------------------+
+| id | date | title | tags |
++------+------------+-----------------------------+------------------------------------------------------------+
+| 1 | 2007-12-31 | Introduction to HPR | hpr, twat, community |
+| 1000 | 2012-05-31 | Episode 1000 | HPR,community,congratulations |
+| 1024 | 2012-07-05 | Episode 1024 | HPR,community,anniversary |
+| 1509 | 2014-05-15 | HPR Needs Shows | HPR, shows, request, call to action, community, contribute |
+| 2008 | 2016-04-13 | HPR needs shows to survive. | HPR,community,shows,call to action,contribute |
++------+------------+-----------------------------+------------------------------------------------------------+
+5 rows in set (0.00 sec)
+
+
This one is like the ‘AND’ example in part 1.
+
The query is the same as the previous one except for the ‘HAVING’ clause. This causes the database engine to count the number of rows with the same id value and only show those where there are two.
+
Note that the 14 rows in the ‘OR’ example and the 5 rows in the ‘AND’ example add to 19, which is how many times these two tags occur in the database.
+
Finding shows related to a given show using tags
+
We probably want to make a more complex query that takes a show number, uses its tags and searches for other shows that also use them, in order to do what droops suggested. I wrote a query which I have included as a file called find_shows_sharing_tags.sql in case anyone is interested.
/*
+ * Given a show number find all other shows that share each of its tags.
+ * Report the tags first followed by some of the episode details
+*/
+
+-- SET @show = 383;
+
+-- Report the tags on the show just for information
+SELECT
+ tag
+FROM
+ tags
+WHERE
+ id = @show;
+
+-- Find all shows sharing the tags, omitting the target one
+SELECT
+ t.lctag, e.id, e.date, h.host, e.title
+FROM
+ tags t,
+ eps e,
+ hosts h
+WHERE
+ t.id = e.id
+ AND e.hostid = h.hostid
+ AND t.tag IN (
+ SELECT
+ tag
+ FROM
+ tags
+ WHERE
+ id = @show)
+GROUP BY e.id
+HAVING e.id <> @show
+ORDER BY t.lctag, e.id;
+
There are two queries in the file. The first one just reports the tags associated with the target show, merely for demonstration purposes. The main query reports which tags have matched which other shows, since I thought that might be useful when generating a web page around it.
+
I will not attempt to explain this in this episode. Perhaps we need a specific Database series to cover such things.
+
To make this demonstration easier the query uses a variable called @show which has to be set beforehand. The following example shows it being used to find two sets of shows:
+
+
one related to show number 2071 (“Undocumented features of Baofeng UV-5R Radio” by MrX)
+
the other related to show 2072 (“That Awesome Time I Deleted My Home Directory” by sigflup)
+
+
+
MariaDB [hpr_hpr]> SET @show = 2071;
+Query OK, 0 rows affected (0.00 sec)
+
+MariaDB [hpr_hpr]> \. find_shows_sharing_tags.sql
++---------------+
+| tag |
++---------------+
+| Amateur Radio |
+| Electronics |
+| Open Source |
++---------------+
+3 rows in set (0.00 sec)
+
++---------------+------+------------+---------------------------+-------------------------------------------------------------+
+| lctag | id | date | host | title |
++---------------+------+------------+---------------------------+-------------------------------------------------------------+
+| amateur radio | 911 | 2012-01-27 | MrX | Hobbies |
+| amateur radio | 1036 | 2012-07-23 | Joel | Setting up Your First Ham Radio Station |
+| amateur radio | 1092 | 2012-10-08 | MrGadgets | Ham Radio: The Original Tech Geek Passion |
+| amateur radio | 1701 | 2015-02-09 | Ken Fallon | FOSDEM 2015 Part 4 of 5 |
+| amateur radio | 2062 | 2016-06-28 | MrX | Now The Chips Are Definitely Down |
+| electronics | 1817 | 2015-07-21 | NYbill | Gathering Parts |
+| electronics | 1858 | 2015-09-16 | NYbill | Multimeter Mod's Part 2 |
+| electronics | 1971 | 2016-02-22 | Dave Morriss | BlinkStick |
+| electronics | 2029 | 2016-05-12 | NYbill | The DSO138 Oscilloscope Kit |
+| electronics | 2044 | 2016-06-02 | NYbill | Bring on the Power! |
+| electronics | 2056 | 2016-06-20 | Tony Hughes AKA TonyH1212 | Interview with a young hacker |
+| electronics | 2089 | 2016-08-04 | MrX | Solving a blinkstick python problem |
+| electronics | 2148 | 2016-10-26 | NYbill | The DSO138 Oscilloscope Kit Part 2 |
+| open source | 242 | 2008-12-03 | UTOSC | Open Source in Government Panel Discussion |
+| open source | 1402 | 2013-12-17 | Thaj Sara | How I Started Using Linux and Free and Open Source Software |
+| open source | 1529 | 2014-06-12 | Ahuka | TrueCrypt, Heartbleed, and Lessons Learned |
+| open source | 1641 | 2014-11-17 | johanv | The real reasons for using Linux |
+| open source | 1653 | 2014-12-03 | Ahuka | Ruth Suehle at Ohio Linux Fest 2014 |
+| open source | 1682 | 2015-01-13 | daw | Introduction to the Netizen Empowerment Federation |
+| open source | 1686 | 2015-01-19 | Steve Bickle | Interview with Joel Gibbard of OpenHand |
+| open source | 1723 | 2015-03-11 | Kevie | Success With Students |
+| open source | 1736 | 2015-03-30 | Mr. Young | How I run my small business using Linux |
+| open source | 1783 | 2015-06-03 | GNULinuxRTM | Windows To Linux - Better Late Than Never. |
+| open source | 1788 | 2015-06-10 | Kevie | Podcrawl Glasgow 2015 |
+| open source | 1917 | 2015-12-08 | klaatu | OpenSource.com |
+| open source | 1984 | 2016-03-10 | Clinton Roy | A Love Letter to linux.conf.au |
+| open source | 2036 | 2016-05-23 | Dave Morriss | Glasgow Podcrawl 2016 |
+| open source | 2155 | 2016-11-04 | Ahuka | Ohio LinuxFest 2016 |
+| open source | 2170 | 2016-11-25 | Ken Fallon | soundtrap.io |
+| open source | 2182 | 2016-12-13 | spaceman | why say GNU/Linux ? |
++---------------+------+------------+---------------------------+-------------------------------------------------------------+
+30 rows in set (0.00 sec)
+
+MariaDB [hpr_hpr]> SET @show = 2072;
+Query OK, 0 rows affected (0.00 sec)
+
+MariaDB [hpr_hpr]> \. find_shows_sharing_tags.sql
++------------+
+| tag |
++------------+
+| dd |
+| filesystem |
+| grep |
++------------+
+3 rows in set (0.00 sec)
+
++-------+------+------------+---------+-----------------+
+| lctag | id | date | host | title |
++-------+------+------------+---------+-----------------+
+| grep | 2040 | 2016-05-27 | matthew | Why I Use Linux |
++-------+------+------------+---------+-----------------+
+1 row in set (0.00 sec)
+
+
It is interesting that we only get one other show back in the second case. I think this demonstrates the shortage of good tags in the database at the moment.
+
Need I say more? ☺
+
Using regular expressions
+
This one is just for fun. I was experimenting with other query types and came up with this one that looks for a partial tag using a regular expression. The tag being searched for is anything containing a word ending in ‘working’. The expression ‘[[:>:]]’ is MySQL/MariaDB’s end-of-word regexp boundary operator.
+
+
MariaDB [hpr_hpr]> SELECT e.id,date,h.host,title,e.tags AS eps_tags,(SELECT group_concat(tag) FROM tags GROUP BY id HAVING id = e.id) AS taglist FROM eps e JOIN hosts h USING (hostid) JOIN tags t USING (id) WHERE t.tag REGEXP 'working[[:>:]]' GROUP BY e.id;
++------+------------+----------------------+-------------------------------------------------------------------+---------------------------------------------------------------+---------------------------------------------------------+
+| id | date | host | title | eps_tags | taglist |
++------+------------+----------------------+-------------------------------------------------------------------+---------------------------------------------------------------+---------------------------------------------------------+
+| 1121 | 2012-11-19 | klaatu | Klaatu continues his Networking Basics series with a SAMBA howto. | networking,SMB,CIFS,SAMBA,file server,NFS,AFP | AFP,CIFS,file server,networking,NFS,SAMBA,SMB |
+| 1127 | 2012-11-27 | klaatu | AFP file share on a Linux server | networking,AFP,Apple Filing Protocol,Netatalk | AFP,Apple Filing Protocol,Netatalk,networking |
+| 1193 | 2013-02-27 | Ken Fallon | Chris Conder Catchup on Broadband for Rural North | networking,broadband,fibre optic,Lancashire | broadband,fibre optic,Lancashire,networking |
+| 1774 | 2015-05-21 | Jon Kulp | Router Hacking | Networking, Routers, Printer Setup, dd-wrt, tomato, openwrt | dd-wrt,Networking,openwrt,Printer Setup,Routers,tomato |
+| 1954 | 2016-01-28 | Jon Kulp | Grandpa Shows Us How to Turn Custom Pens | DIY, pens, woodworking, lathe, writing instruments | DIY,lathe,pens,woodworking,writing instruments |
+| 2077 | 2016-07-19 | Christopher M. Hobbs | libernil.net and self hosting for friends and family | gnu, linux, networking, community, servers, services, commons | commons,community,gnu,linux,networking,servers,services |
++------+------------+----------------------+-------------------------------------------------------------------+---------------------------------------------------------------+---------------------------------------------------------+
+6 rows in set (0.03 sec)
+
+
The query computes one of the fields it returns by scanning the ‘tags’ table using group_concat which concatenates multiple rows into a list. It uses this method to display the tags held in the ‘eps’ table with their equivalents in the ‘tags’ table for comparison.
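+
A stripped-down version of that computed field shows what group_concat does for a single episode:
+
SELECT id, group_concat(tag) AS taglist
FROM tags
GROUP BY id
HAVING id = 2077;
-- 2077 | commons,community,gnu,linux,networking,servers,services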
+
Conclusion
+
I probably do not need to say that I prefer this solution to the one discussed in the last episode.
+
The downside of my original solution using only SQL and stored procedures is that regenerating the whole ‘tags’ table every time is overkill. On the other hand, as mentioned above, the Perl script (refresh_tags) only operates on the differences and so seems a lot better.
+
Both of these approaches rely on the fact that the current ‘tags’ field in the ‘eps’ table provides the raw information, and tags in this form are easy to manage.
+
Note that it makes no difference to the Perl solution whether the CSV data in the ‘eps.tags’ field is properly formatted or not (within limits anyway). With the much more simplistic SQL-only solution this matters a lot.
+
Of course, it would be possible to replace the ‘eps.tags’ field with a ‘tags’ table. This would require software to be written to display and edit tags per episode. This would not be difficult.
+
Epilogue
+
As before a couple of requests regarding tags:
+
+
Please include tags when uploading your shows. Just add a few keywords to the tags field reflecting what your show was about or topics you spoke about.
+
Note: more contributions to the project to add missing tags will always be welcome! Visit the page on the HPR website listing missing summaries and tags to find out how you could help.
+
+
diff --git a/eps/hpr2260/hpr2260_normalise_tags_1.sql b/eps/hpr2260/hpr2260_normalise_tags_1.sql
new file mode 100755
index 0000000..7bc1678
--- /dev/null
+++ b/eps/hpr2260/hpr2260_normalise_tags_1.sql
@@ -0,0 +1,114 @@
+/*
+ * Define a function to return a particular element from a comma-delimited
+ * string. There is nothing already present in MySQL to do this.
+ *
+ * Create a table to hold the split tags, storing them in lower- and
+ * upper-case form.
+ *
+ * Define a procedure to do the work of visiting every row in the 'eps' table
+ * to extract the tags and place them in the 'tags' table with the episode id
+ * they are associated with. This could be run on a periodic basis ('call
+ * NormaliseEpisodeTags()') preceded by the statement 'DELETE FROM tags;'.
+ *
+ * With the 'tags' table filled then it can be queried for tag information as
+ * shown in the examples below.
+ *
+ * 1. To count tag frequencies (case insensitive) and show the top 50:
+ *
+ * SELECT tag,lctag,COUNT(tag) AS freq FROM tags GROUP BY tag ORDER BY COUNT(tag) DESC LIMIT 50;
+ *
+ * 2. To return the episode numbers of shows tagged with a particular word:
+ *
+ * SELECT e.id,e.date,e.title,h.host FROM eps e JOIN hosts h ON e.hostid = h.hostid
+ * WHERE e.id IN (SELECT id FROM tags WHERE lctag = 'linux');
+ *
+ * ----------------------------------------------------------------------------
+ * (These ideas were based upon the discussions at
+ * https://stackoverflow.com/questions/17942508/sql-split-values-to-multiple-rows)
+ * ----------------------------------------------------------------------------
+ */
+
+DELIMITER $$
+
+/*
+ * Create function 'strSplit'
+ *
+ * Arguments:
+ * x - string to work on
+ * delim - delimiter to split on
+ * pos - starting position
+ *
+ */
+DROP FUNCTION IF EXISTS strSplit;
+
+CREATE FUNCTION strSplit(x VARCHAR(65000), delim VARCHAR(12), pos INTEGER)
+ RETURNS VARCHAR(65000)
+BEGIN
+ DECLARE output VARCHAR(65000);
+ SET output = TRIM(
+ REPLACE(
+ SUBSTRING(
+ SUBSTRING_INDEX(x, delim, pos),
+ LENGTH(SUBSTRING_INDEX(x, delim, pos - 1)) + 1
+ ),
+ delim,
+ ''
+ )
+ );
+ IF output = '' THEN
+ SET output = null;
+ END IF;
+ RETURN output;
+END $$
+
+/*
+ * Create procedure 'NormaliseEpisodeTags'
+ *
+ * No arguments
+ *
+ */
+DROP PROCEDURE IF EXISTS NormaliseEpisodeTags;
+
+CREATE PROCEDURE NormaliseEpisodeTags()
+BEGIN
+ DECLARE i INTEGER;
+
+ SET i = 1;
+ REPEAT
+ INSERT INTO tags (id, tag, lctag)
+ SELECT id, strSplit(tags, ',', i), lower(strSplit(tags, ',', i))
+ FROM eps
+ WHERE strSplit(tags, ',', i) IS NOT NULL;
+ SET i = i + 1;
+ UNTIL ROW_COUNT() = 0
+ END REPEAT;
+END $$
+
+DELIMITER ;
+
+/*
+ * Create table 'tags'
+ *
+ */
+DROP TABLE IF EXISTS tags;
+
+CREATE TABLE tags (
+ id int(5) NOT NULL,
+ tag varchar(200),
+ lctag varchar(200)
+);
+
+-- DROP INDEX tags_all ON tags;
+CREATE UNIQUE INDEX tags_all ON tags (id,tag,lctag);
+
+-- DROP INDEX tags_id ON tags;
+CREATE INDEX tags_id ON tags (id);
+
+-- DROP INDEX tags_tag ON tags;
+CREATE INDEX tags_tag ON tags (tag);
+
+-- DROP INDEX tags_lctag ON tags;
+CREATE INDEX tags_lctag ON tags (lctag);
+
+
+-- vim: syntax=sql:ts=8:ai:tw=78:et:fo=tcrqn21:comments+=b\:--
diff --git a/eps/hpr2260/hpr2260_refresh_tags b/eps/hpr2260/hpr2260_refresh_tags
new file mode 100755
index 0000000..daef866
--- /dev/null
+++ b/eps/hpr2260/hpr2260_refresh_tags
@@ -0,0 +1,604 @@
+#!/usr/bin/env perl
+#===============================================================================
+#
+# FILE: refresh_tags
+#
+# USAGE: ./refresh_tags
+#
+# DESCRIPTION: Parse tags from the eps.tags field and use them to populate
+# the tags table. The eps tag list is definitive (though it's
+# quite limited since it's only 200 characters long), and so the
+# tags table is kept in step by adding and deleting.
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: ---
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.0.3
+# CREATED: 2016-07-17 15:59:24
+# REVISION: 2017-01-30 17:13:28
+#
+#===============================================================================
+
+use 5.010;
+use strict;
+use warnings;
+use utf8;
+
+use Carp;
+use Getopt::Long;
+use Config::General;
+use Text::CSV_XS;
+use SQL::Abstract;
+use DBI;
+
+use Data::Dumper;
+
+#
+# Version number (manually incremented)
+#
+our $VERSION = '0.0.3';
+
+#
+# Script and directory names
+#
+( my $PROG = $0 ) =~ s|.*/||mx;
+( my $DIR = $0 ) =~ s|/?[^/]*$||mx;
+$DIR = '.' unless $DIR;
+
+#-------------------------------------------------------------------------------
+# Declarations
+#-------------------------------------------------------------------------------
+#
+# Constants and other declarations
+#
+my $basedir = "$ENV{HOME}/HPR/Database";
+my $configfile = "$basedir/.hpr_db.cfg";
+
+my ( $dbh, $sth1, $h1 );
+my ( $status, @fields );
+my ( %eps_tags, %tags_tags, %diffs );
+
+#
+# Enable Unicode mode
+#
+binmode STDOUT, ":encoding(UTF-8)";
+binmode STDERR, ":encoding(UTF-8)";
+
+#
+# Load configuration data
+#
+my $conf = Config::General->new(
+ -ConfigFile => $configfile,
+ -InterPolateVars => 1,
+ -ExtendedAccess => 1,
+);
+my %config = $conf->getall();
+
+#-------------------------------------------------------------------------------
+# Options and arguments
+#-------------------------------------------------------------------------------
+#
+# Process options
+#
+my %options;
+Options( \%options );
+
+Usage() if ( $options{'help'} );
+
+#
+# Collect options
+#
+my $verbose = ( defined( $options{verbose} ) ? $options{verbose} : 0 );
+my $dry_run = ( defined( $options{'dry-run'} ) ? $options{'dry-run'} : 1 );
+
+#-------------------------------------------------------------------------------
+# Connect to the database
+#-------------------------------------------------------------------------------
+my $dbhost = $config{database}->{host} // '127.0.0.1';
+my $dbport = $config{database}->{port} // 3306;
+my $dbname = $config{database}->{name};
+my $dbuser = $config{database}->{user};
+my $dbpwd = $config{database}->{password};
+$dbh = DBI->connect( "dbi:mysql:host=$dbhost;port=$dbport;database=$dbname",
+ $dbuser, $dbpwd, { AutoCommit => 1 } )
+ or croak $DBI::errstr;
+
+#
+# Enable client-side UTF8
+#
+$dbh->{mysql_enable_utf8} = 1;
+
+my $csv = Text::CSV_XS->new;
+
+#-------------------------------------------------------------------------------
+# Collect and process the id numbers and tags from the 'eps' table
+#-------------------------------------------------------------------------------
+%eps_tags = %{ collect_eps_tags( $dbh, $verbose ) };
+
+#-------------------------------------------------------------------------------
+# Collect any tags we've already stashed in the database
+#-------------------------------------------------------------------------------
+%tags_tags = %{ collect_db_tags( $dbh, $verbose ) };
+
+#-------------------------------------------------------------------------------
+# Now compare the two sources to look for differences
+#-------------------------------------------------------------------------------
+%diffs = %{ find_differences(\%eps_tags,\%tags_tags) };
+
+#-------------------------------------------------------------------------------
+# Perform the updates if there are any
+#-------------------------------------------------------------------------------
+if (%diffs) {
+ print "Differences found\n";
+ unless ($dry_run) {
+ #
+ # Loop through all of the actions by episode number
+ #
+ foreach my $id ( sort { $a <=> $b } keys(%diffs) ) {
+
+ #
+ # Do deletions before additions
+ #
+ if ( exists( $diffs{$id}->{deletions} ) ) {
+ do_deletions( $dbh, $verbose, $id, $diffs{$id}->{deletions} );
+ }
+
+ #
+ # Do additions after deletions
+ #
+ if ( exists( $diffs{$id}->{additions} ) ) {
+ do_additions( $dbh, $sth1, $verbose, $id,
+ $diffs{$id}->{additions} );
+ }
+
+ }
+ }
+ else {
+ print "No changes made - dry run\n";
+ }
+}
+else {
+ print "No differences found\n";
+}
+
+exit;
+
+#=== FUNCTION ================================================================
+# NAME: collect_eps_tags
+# PURPOSE: Collects the tags from the eps.tags field
+# PARAMETERS: $dbh Database handle
+# $verbose Verbosity level
+# RETURNS: A reference to the hash created by collecting all the tags
+# DESCRIPTION:
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub collect_eps_tags {
+ my ( $dbh, $verbose ) = @_;
+
+ my ( $status, @fields, %hash );
+ my ( $sth, $h );
+
+ #
+ # For parsing the field as CSV
+ #
+ my $csv = Text::CSV_XS->new;
+
+ #
+ # Query the eps table for all the id and tags
+ #
+ $sth = $dbh->prepare(
+ q{SELECT id,tags FROM eps
+ WHERE length(tags) > 0
+ ORDER BY id}
+ ) or die $DBI::errstr;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ $sth->execute;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ #
+ # Loop through what we got
+ #
+ while ( $h = $sth->fetchrow_hashref ) {
+ #
+ # Parse the tag list
+ #
+ $status = $csv->parse( $h->{tags} );
+ unless ($status) {
+ #
+ # Report any errors
+ #
+ print "Parse error on episode ", $h->{id}, "\n";
+ print $csv->error_input(), "\n";
+ next;
+ }
+ @fields = $csv->fields();
+
+ next unless (@fields);
+
+ #
+ # Trim all tags (don't alter $_ when doing it)
+ #
+ @fields = map {
+ my $t = $_;
+ $t =~ s/(^\s+|\s+$)//g;
+ $t;
+ } @fields;
+
+ #print "$h->{id}: ",join(",",@fields),"\n";
+
+ #
+ # Save the id and its tags, sorted for comparison
+ #
+ $hash{ $h->{id} } = [ sort @fields ];
+
+ }
+
+ #print Dumper(\%hash),"\n";
+
+ #
+ # Dump all id numbers and tags if the verbose level is high enough
+ #
+ if ( $verbose >= 2 ) {
+ print "\nTags collected from the 'eps' table\n\n";
+ foreach my $id ( sort { $a <=> $b } keys(%hash) ) {
+ printf "%04d: %s\n", $id, join( ",", @{ $hash{$id} } );
+ }
+ }
+
+ return \%hash;
+
+}
+
+#=== FUNCTION ================================================================
+# NAME: collect_db_tags
+# PURPOSE: Collects the tags already stored in the database
+# PARAMETERS: $dbh Database handle
+# $verbose Verbosity level
+# RETURNS: A reference to the hash created by collecting all the tags
+# DESCRIPTION:
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub collect_db_tags {
+ my ( $dbh, $verbose ) = @_;
+
+ my %hash;
+ my ( $sth, $h );
+
+ #
+ # Query the database for tag data
+ #
+
+ $sth = $dbh->prepare(q{SELECT * FROM tags ORDER BY id})
+ or die $DBI::errstr;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ $sth->execute;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ #
+ # Loop through what we got building an array of tags per episode number
+ #
+ while ( $h = $sth->fetchrow_hashref ) {
+ if ( defined( $hash{ $h->{id} } ) ) {
+ push( @{ $hash{ $h->{id} } }, $h->{tag} );
+ }
+ else {
+ $hash{ $h->{id} } = [ $h->{tag} ];
+ }
+ }
+
+ #
+ # Sort all the tag arrays for comparison
+ #
+ foreach my $id ( keys(%hash) ) {
+ $hash{$id} = [ sort @{ $hash{$id} } ];
+ }
+
+ #
+ # Dump all id numbers and tags if the verbose level is high enough
+ #
+ if ( $verbose >= 2 ) {
+ print "\nTags collected from the 'tags' table\n\n";
+ foreach my $id ( sort { $a <=> $b } keys(%hash) ) {
+ printf "%04d: %s\n", $id, join( ",", @{ $hash{$id} } );
+ }
+ print '=-' x 40,"\n";
+ }
+
+ return \%hash;
+
+}
+
+#=== FUNCTION ================================================================
+# NAME: find_differences
+# PURPOSE: Find the differences between two hashes containing tags
+# PARAMETERS: $master Reference to the master hash
+# $slave Reference to the slave hash
+# RETURNS: A reference to the hash created checking for differences
+# DESCRIPTION:
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub find_differences {
+ my ($master,$slave) = @_;
+
+ my %hash;
+
+ foreach my $id ( sort { $a <=> $b } keys(%$master) ) {
+ my %iddiffs = array_compare( $master->{$id}, $slave->{$id} );
+ if (%iddiffs) {
+ if ( $verbose >= 1 ) {
+ #
+ # Report what was found if asked to
+ #
+ print "Episode: $id\n";
+ print "Update:\n\teps: ", join( ",", @{ $master->{$id} } ), "\n";
+ print "\ttags: ",
+ (
+ defined( $slave->{$id} )
+ ? join( ",", @{ $slave->{$id} } )
+ : '--None--' ), "\n";
+ print '-' x 80,"\n";
+ }
+ $hash{$id} = {%iddiffs};
+ }
+ }
+
+ #
+ # Report differences and actions if the verbose level is high enough
+ #
+ if ( $verbose >= 2 ) {
+ print "\nDifferences and actions\n\n";
+ foreach my $id ( sort { $a <=> $b } keys(%hash) ) {
+ print "Episode: $id\n";
+ if ( exists( $hash{$id}->{deletions} ) ) {
+ print "Deletions: ";
+ print join( ",", @{ $hash{$id}->{deletions} } ), "\n";
+ }
+ if ( exists( $hash{$id}->{additions} ) ) {
+ print "Additions: ";
+ print join( ",", @{ $hash{$id}->{additions} } ), "\n";
+ }
+ print '-' x 80, "\n";
+ }
+ }
+
+ return \%hash;
+}
+
+#=== FUNCTION ================================================================
+# NAME: do_deletions
+# PURPOSE: Perform any deletions indicated in an array for a given
+# episode
+# PARAMETERS: $dbh Database handle
+# $verbose Verbosity level
+# $id Episode number
+# $tags Reference to an array of tags for this episode
+# RETURNS: Nothing
+# DESCRIPTION:
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub do_deletions {
+ my ( $dbh, $verbose, $id, $tags ) = @_;
+
+ my ( $stmt, @bind, %data, %where );
+
+ #
+ # We will dynamically build SQL as we go
+ #
+ my $sql = SQL::Abstract->new;
+
+ #
+ # Process the list of tags we have been given
+ #
+ for my $i ( 0 .. $#$tags ) {
+ #
+ # Set up a deletion '... where id = ? and tag = ?'
+ #
+ %where = ( id => $id, tag => $tags->[$i] );
+
+ ( $stmt, @bind ) = $sql->delete( 'tags', \%where );
+
+ my $sth = $dbh->prepare($stmt);
+ my $rv = $sth->execute(@bind);
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+ $rv = 0 if ( $rv eq '0E0' );
+
+ #
+ # Report the action
+ #
+ if ($rv) {
+ print "Deleted tag for show $id ($tags->[$i])\n";
+ }
+
+ }
+
+ print "Deleted ", scalar(@$tags), " row",
+ ( scalar(@$tags) != 1 ? 's' : '' ), "\n";
+
+}
+
+#=== FUNCTION ================================================================
+# NAME: do_additions
+# PURPOSE: Perform any additions indicated in an array for a given
+# episode
+# PARAMETERS: $dbh Database handle
+# $sth A prepared database handle with a query to
+# search for the target tag
+# $verbose Verbosity level
+# $id Episode number
+# $tags Reference to an array of tags for this episode
+# RETURNS: Nothing
+# DESCRIPTION: FIXME
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub do_additions {
+ my ( $dbh, $sth, $verbose, $id, $tags ) = @_;
+
+ my ( $sth1, $rv, $h, $tid, $stmt, @bind, %data );
+
+ #
+ # We will dynamically build SQL as we go
+ #
+ my $sql = SQL::Abstract->new;
+
+ my @lctags = map { lc($_) } @$tags;
+
+ for my $i ( 0 .. $#$tags ) {
+ #
+ # Build the row we're going to add
+ #
+ %data = (
+ id => $id,
+ tag => $tags->[$i],
+ lctag => $lctags[$i]
+ );
+
+ ( $stmt, @bind ) = $sql->insert( 'tags', \%data );
+
+ my $sth = $dbh->prepare($stmt);
+ my $rv = $sth->execute(@bind);
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+ $rv = 0 if ( $rv eq '0E0' );
+
+ #
+ # Report the action
+ #
+ if ($rv) {
+ print "Added tag for show $id ($tags->[$i])\n";
+ }
+ }
+
+ print "Added ", scalar(@$tags), " row",
+ ( scalar(@$tags) != 1 ? 's' : '' ), "\n";
+
+}
+
+#=== FUNCTION ================================================================
+# NAME: array_compare
+# PURPOSE: Compares the elements of two arrays to see if an element
+# present in the master is also present in the slave
+# PARAMETERS: $arr1 A reference to the first array; the MASTER
+# $arr2 A reference to the second array; the SLAVE
+# RETURNS: A hash containing arrays of additions and deletions of the
+# elements that are different. The structure is:
+# {
+# additions => [ tag1, tag2 .. tagn ],
+# deletions => [ tag1, tag2 .. tagn ],
+# }
+# The returned hash will be empty if there are no differences.
+# DESCRIPTION: The requirement is to find if there are differences, then to
+# find what they are so that other code can make the slave array
+# match the master. The two arrays come from a database, so
+# we're trying to make a second source (slave) equal the first
+# (master).
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub array_compare {
+ my ( $arr1, $arr2 ) = @_;
+
+ my %res;
+ my ( @additions, @deletions );
+ my %h1 = map { $_ => 1 } @$arr1;
+ my %h2 = map { $_ => 1 } @$arr2;
+
+ #
+ # Find additions
+ #
+ for my $key ( keys(%h1) ) {
+ unless ( exists( $h2{$key} ) ) {
+ push( @additions, $key );
+ }
+ }
+
+ #
+ # Find deletions
+ #
+ for my $key ( keys(%h2) ) {
+ unless ( exists( $h1{$key} ) ) {
+ push( @deletions, $key );
+ }
+ }
+
+ $res{additions} = [@additions] if @additions;
+ $res{deletions} = [@deletions] if @deletions;
+
+ return %res;
+}
+
+#=== FUNCTION ================================================================
+# NAME: Usage
+# PURPOSE: Display a usage message and exit
+# PARAMETERS: None
+# RETURNS: To command line level with exit value 1
+# DESCRIPTION: Builds the usage message using global values
+# THROWS: no exceptions
+# COMMENTS: none
+# SEE ALSO: n/a
+#===============================================================================
+sub Usage {
+    # NOTE: the body of this function is an assumed minimal reconstruction;
+    # the original heredoc was truncated at this point in the notes.
+    print STDERR "$PROG v$VERSION\n\nUsage: $PROG [options]\n";
+    exit(1);
+}
diff --git a/eps/hpr2270/hpr2270_full_shownotes.html b/eps/hpr2270/hpr2270_full_shownotes.html
new file mode 100755
--- /dev/null
+++ b/eps/hpr2270/hpr2270_full_shownotes.html
+
+
+
+
+
+
+ Managing tags on HPR episodes - 3 (HPR Show 2270)
+
+
+
+
+
+
+
+
+
+
Managing tags on HPR episodes - 3 (HPR Show 2270)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
+
+
Introduction
+
This is the third (and last) show looking at the subject of Managing Tags relating to HPR shows.
+
In the first show we looked at why we need tags, and examined the advantages and disadvantages of the present system of storage. We considered the drawbacks of this design when searching the tags.
+
Then in the second show we looked at a simple way of making a tags table and how to query it in order to fulfil the requirements defined in the first show.
+
In this show we’ll look at a more rigorous, efficient, “normalised” solution.
+
Database Design
+
In this episode we have finally arrived at a design which a database designer would choose. It has been a fairly long journey, but the object was to examine alternatives and evaluate them.
+
3. Many-to-many relationship
+
In the last episode we looked at a design where we built and managed a tag table. One disadvantage with the method used is that the table contains the same tag multiple times. It also fails to conform to the accepted database design recommendations.
+
The solution described in show 2 works and is a considerable improvement on the first solution. However, this design does not really reflect the relationship between HPR episodes and tags. This relationship is what database designers call “many-to-many”.
+
What this means is that a given episode may have many tags and a given tag may be associated with many episodes. This way of doing things was explained very well by Mike Ray in episode 1569, “Many-to-many data relationship howto”. I would recommend you listen to that show if you’d like a good understanding of how to set up such a relationship in a database.
+
In such a design one copy of each tag would be held in a tags table, and there would be a second linking (or cross-reference) table joining episodes and tags.
+
The following is a simplistic diagram which represents imaginary show number 1234 in an imaginary episodes table being associated with the tag “banana”. The linkage is via the joining table, which shows the association between episode 1234 and the tag stored with id 456.
+
+Episode 1234 is associated with tag number 456 “banana”
+
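In SQL terms the diagram resolves to a query joining the three tables. A sketch, assuming the imaginary data shown above:
+
SELECT e.id, t.tag
FROM eps e, eps_tags2_xref x, tags2 t
WHERE e.id = x.eps_id
AND x.tags2_id = t.id
AND e.id = 1234;
-- 1234 | banana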
+
Setting up and managing the tables
+
As with the previous method, I am using the comma separated list in the ‘eps’ table to populate the new tables. However, this time I am not trying to write stored procedures and functions to work on these tables, but have developed a Perl script instead, which we will look at later.
+
The SQL which defines the new tables is included with this show, and is called normalise_tags_2.sql. This is shown below:
/*
+ * -----------------------------------------------------------------------------
+ * Many-to-many tag tables
+ *
+ * The 'DROP ...' things are in this file in case we plan to regenerate
+ * everything, perhaps after a table design change.
+ * -----------------------------------------------------------------------------
+ *
+ * .............................................................................
+ *
+ * Create table 'tags2'
+ *
+ * This holds all tags with an associated index. It's called 'tags2' because
+ * we already have a 'tags' table demonstrating an alternative tag solution.
+ *
+ * .............................................................................
+ *
+ */
+-- DROP TABLE IF EXISTS tags2;
+CREATE TABLE tags2 (
+    id int(5) PRIMARY KEY NOT NULL AUTO_INCREMENT,
+    tag varchar(200) NOT NULL,
+    lctag varchar(200) NOT NULL
+);
+
+/*
+ * An index to make it easier to find tags and to enforce uniqueness
+ */
+
+-- DROP INDEX tags2_tag ON tags2;
+CREATE UNIQUE INDEX tags2_tag ON tags2 (tag);
+
+/*
+ * .............................................................................
+ *
+ * Create table 'eps_tags2_xref'
+ *
+ * This is the cross reference or 'joining' table
+ *
+ * .............................................................................
+ *
+ */
+-- DROP TABLE IF EXISTS eps_tags2_xref;
+CREATE TABLE eps_tags2_xref (
+    eps_id int(5) NOT NULL,
+    tags2_id int(5) NOT NULL
+);
+
+/*
+ * Make a primary key from the two columns
+ */
+-- DROP INDEX all_eps_tags2 ON eps_tags2_xref;
+CREATE UNIQUE INDEX all_eps_tags2 ON eps_tags2_xref (eps_id,tags2_id);
+
+/*
+ * Make a tag id index to speed deletion (special case)
+ */
+-- DROP INDEX all_tags ON eps_tags2_xref;
+CREATE INDEX all_tags ON eps_tags2_xref (tags2_id);
+
+
+-- vim: syntax=sql:ts=8:ai:tw=78:et:fo=tcrqn21:comments+=b\:--
+
+
A new table of tags called ‘tags2’ is defined which is equivalent to the ‘tags’ table we saw in the last episode. It will contain only single instances of each tag. As mentioned before, there is probably no need to hold both the mixed case and lower case versions of the tags.
+
The table ‘eps_tags2_xref’ is the joining table. It contains a column ‘eps_id’ which is the key of a row in the ‘eps’ table. The column ‘tags2_id’ is the key of a row in the ‘tags2’ table.
+
The joining table will contain multiple references to episodes and to tags, but there will never be more than one row with the same combination of episode id and tag id. This is enforced by the index all_eps_tags2 which indexes the combined keys.
+
The index ‘tags2_tag’ on the ‘tags2’ table ensures that the ‘tag’ field is unique.
+
The index ‘all_tags’ indexes the ‘tags2_id’ column of the ‘eps_tags2_xref’ table where it helps speed up deletion of joining entries when a tag association with an episode is removed (which can happen when editing tags on a show).
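+
Together these indexes also make tag additions simple to express. A sketch of associating a tag with an episode, relying on the unique indexes to make repeated attempts harmless:
+
-- Add the tag if it is not already known
INSERT IGNORE INTO tags2 (tag, lctag) VALUES ('banana', lower('banana'));
-- Link it to episode 1234; duplicate links are rejected by 'all_eps_tags2'
INSERT IGNORE INTO eps_tags2_xref (eps_id, tags2_id)
    SELECT 1234, id FROM tags2 WHERE tag = 'banana';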
+
+
Foreign keys
+
The way in which these new tables are set up does not use the full relational capabilities of MariaDB (“referential integrity”). By default MariaDB database tables do not support foreign keys. This feature can be enabled by defining the tables as having the type InnoDB as opposed to the default MyISAM. None of the tables in the HPR database are defined as InnoDB at the moment.
+
The concept of foreign keys is a way of making database tables dependent on one another - of defining relationships between them. The field ‘eps_id’ in the ‘eps_tags2_xref’ table is an episode id number in the ‘eps’ table. It should only contain episode id numbers which match those in the ‘eps’ table. Making it a foreign key linked to the ‘id’ field in the ‘eps’ table ensures that this is so.
+
A similar foreign key is the ‘tags2_id’ field which contains the id number of a tag in the ‘tags2’ table.
+
Another advantage of having foreign keys here would be that the database itself can ensure consistency. This is achieved with features like cascading deletion. What this means is that if all references to a tag in the ‘eps_tags2_xref’ are deleted, potentially leaving a tag “orphaned”, the database can be configured to delete the tag.
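+
As a sketch (this is not how the demonstration tables are actually defined), the joining table could declare these relationships using InnoDB:
+
CREATE TABLE eps_tags2_xref (
    eps_id int(5) NOT NULL,
    tags2_id int(5) NOT NULL,
    PRIMARY KEY (eps_id, tags2_id),
    FOREIGN KEY (eps_id) REFERENCES eps (id) ON DELETE CASCADE,
    FOREIGN KEY (tags2_id) REFERENCES tags2 (id) ON DELETE CASCADE
) ENGINE=InnoDB;
+
Note that the cascading deletes here work from the referenced tables into the joining table; tidying up a tag that has become orphaned would still need a trigger or a periodic job.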
+
Mike Ray’s show referenced earlier (episode 1569) covers these issues very well.
+
Perl script ‘refresh_tags_2’
+
This script (refresh_tags_2) is quite complex, so we will not look at it in detail here.
+
In essence it scans the ‘eps’ table in the database, collecting all of the tags stored in CSV form with the episode number they belong to. It also collects the tags already stored in the ‘tags2’ table and stores them with the associated episode number.
+
Then it can compare the two sets of tags, noting differences. If a new tag has appeared it can add it. If a tag has disappeared it can delete it. It manages both the joining table ‘eps_tags2_xref’ and the ‘tags2’ table.
+
The script also performs actions that the database itself could carry out if the tables allowed foreign keys. As mentioned elsewhere, one of the changes needed to operate the HPR database properly is to enable the foreign key features.
+
Advantages
+
+
This method of storing tags is the most efficient one.
+
This method is vastly preferable to the comma-separated values method examined in episode one of this mini-series.
+
It is also preferable to the method shown in the last episode because a given tag is stored only once. This means that making a spelling correction to a tag, for example, need only be done once.
+
With the use of full relational capabilities (foreign keys, cascading deletion) this design will remain self-consistent.
+
+
Disadvantages
+
+
Although this is the best solution in terms of database design the concepts can be a little daunting to people less experienced in the ways of relational databases.
+
+
Searching
+
Finding shows with a given tag
+
The query needed to find all shows with the tag ‘community’ is a little more complex now since we have to use three tables: ‘eps’, ‘eps_tags2_xref’ and ‘tags2’.
+
SELECT e.id,e.date,e.title,e.tags
+FROM eps e, eps_tags2_xref et, tags2 t
+WHERE e.id = et.eps_id
+AND et.tags2_id = t.id
+AND t.tag = 'community'
+
Note: Since writing the notes for the last show I have found out how to run database queries from the templating system I use to make show notes, and have used this to make an HTML table, which I hope is clearer.
+
Show | Date       | Title                                                              | Tags
-----+------------+--------------------------------------------------------------------+----------------------------------------------------------------
   1 | 2007-12-31 | Introduction to HPR                                                | hpr, twat, community
 947 | 2012-03-20 | Presentation by Jared Smith at the Columbia Area Linux Users Group | Fedora,community
1000 | 2012-05-31 | Episode 1000                                                       | HPR,community,congratulations
1024 | 2012-07-05 | Episode 1024                                                       | HPR,community,anniversary
1509 | 2014-05-15 | HPR Needs Shows                                                    | HPR, shows, request, call to action, community, contribute
1913 | 2015-12-02 | The Linux Experiment                                               | linux, the linux experiment, community
2008 | 2016-04-13 | HPR needs shows to survive.                                        | HPR,community,shows,call to action,contribute
2035 | 2016-05-20 | Building Community                                                 | community
2077 | 2016-07-19 | libernil.net and self hosting for friends and family               | gnu, linux, networking, community, servers, services, commons
+
Note: since the last episode three more shows have been added to the database with the ‘community’ and ‘hpr’ tags so we have 17 rows returned this time.
+
If we wanted to look for shows which have both tags, then the following query would be needed:
+
SELECT e.id,e.date,e.title,e.tags
+FROM eps e, eps_tags2_xref et, tags2 t
+WHERE e.id = et.eps_id
+AND et.tags2_id = t.id
+AND t.tag IN ('community','hpr')
+GROUP BY e.id
+HAVING count(e.id) = 2
+
This query looks for shows having both of the two tags, so it is an ‘AND’ operation.
+
Show | Date       | Title                       | Tags
-----+------------+-----------------------------+------------------------------------------------------------
   1 | 2007-12-31 | Introduction to HPR         | hpr, twat, community
1000 | 2012-05-31 | Episode 1000                | HPR,community,congratulations
1024 | 2012-07-05 | Episode 1024                | HPR,community,anniversary
1509 | 2014-05-15 | HPR Needs Shows             | HPR, shows, request, call to action, community, contribute
2008 | 2016-04-13 | HPR needs shows to survive. | HPR,community,shows,call to action,contribute
2255 | 2017-03-24 | The Good Ship HPR           | HPR,community,contribution,podcast
+
Finding shows related to a given show using tags
+
This is the same exercise we used in the last show, following droops’ suggestion. Rather than write a script to do this I have just shown the SQL queries here. There are two. The first simply shows the tags on a given episode (2071), and the second queries the database for shows (other than 2071) which have any of the same tags:
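+
The exact queries are not reproduced here, but sketches against the many-to-many tables would look something like the following (the second uses a regular expression, as explained below):
+
-- Tags on the target show
SET @show = 2071;
SELECT t.tag
FROM tags2 t, eps_tags2_xref x
WHERE t.id = x.tags2_id
AND x.eps_id = @show;
+
-- Other shows with a tag containing 'ham' as a distinct word
SELECT e.id, e.date, e.title, h.host, e.tags
FROM eps e, hosts h, eps_tags2_xref x, tags2 t
WHERE e.hostid = h.hostid
AND e.id = x.eps_id
AND x.tags2_id = t.id
AND t.tag REGEXP '[[:<:]]ham[[:>:]]'
AND e.id <> @show
GROUP BY e.id;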
Most of the clauses in the ‘WHERE’ part of the query join together tables to make subsets. The part containing the regular expression looks for any instance of ‘ham’ as a distinct word. This means it can appear anywhere within a tag, but not as part of a longer word.
+
The ‘GROUP BY’ part ensures that if a tag matches twice only one episode will be returned.
+
This example is probably more complex than it needs to be.
+
Show | Date       | Title                                     | Host                 | Tags
-----+------------+-------------------------------------------+----------------------+-----------------------------------------------------
   6 | 2008-01-08 | Part 15 Broadcasting                      | dosman               | Part 15, HAM, soldering, fcc, radio
1036 | 2012-07-23 | Setting up Your First Ham Radio Station   | Joel                 | Amateur radio,Ham radio
1092 | 2012-10-08 | Ham Radio: The Original Tech Geek Passion | MrGadgets            | HAM radio,amateur radio,CB radio,Morse code
2041 | 2016-05-30 | Router Antennas More = better ?           | Lyle Lastinger       | router,antenna,ham radio
2189 | 2016-12-22 | Working Amateur Radio Satellites          | Christopher M. Hobbs | hamradio, ham, radio, amateur, satellites, projects
2216 | 2017-01-30 | Working AO-85 with my son                 | Christopher M. Hobbs | hamradio, ham, radio, amateur, satellites, projects
2226 | 2017-02-13 | FOSDEM 2017 AW Building                   | Ken Fallon           | FOSDEM 2017, coreboot, GNU GRUB, Olimex, Automotive Grade Linux, Ham radio, CorteXlab, OpenEmbedded
2240 | 2017-03-03 | Amateur Radio Round Table                 | Various Hosts        | amateur radio, ham
+
Simplifying things with a VIEW
+
As an experiment I have included the definition of a ‘VIEW’ which helps to hide some of the complexity of the queries needed to use the many-to-many tables.
+
The SQL which defines the experimental view is included with this show, and is called eps_hosts_tags_view.sql. This is shown below:
/*
+ * Create a view to simplify eps, host and tag access using the many to many
+ * tag tables. This view is a demonstration of what could be done in the live
+ * database, where more views could be created, of various levels of
+ * complexity, depending on need.
+ */
+
+CREATE OR REPLACE VIEW eht_view AS
+ SELECT
+ e.*,
+ h.host, h.email,
+ t.tag,
+ (SELECT group_concat(tag) FROM tags2 t2, eps_tags2_xref et2 WHERE
+ et2.tags2_id = t2.id GROUP BY et2.eps_id HAVING et2.eps_id = e.id)
+ AS taglist
+ FROM eps e, hosts h, eps_tags2_xref et, tags2 t
+ WHERE e.hostid = h.hostid
+ AND e.id = et.eps_id
+ AND et.tags2_id = t.id;
+
+-- vim: syntax=sql:ts=8:ai:tw=78:et:fo=tcrqn21:comments+=b\:--
+
The view is called ‘eht_view’ and is really a way of storing a ‘SELECT’ query for repeated use. The result is a sort of virtual table which can be used in further queries.
+
For example:
+
SELECT id,date,title,host,taglist
+FROM eht_view
+WHERE tag REGEXP '[[:<:]]solder[[:>:]]'
+GROUP BY id;
Notice that the view contains a sub-SELECT which concatenates all the tags belonging to an episode. This demonstrates that storing the tags in a CSV list as seen in episode 1 is unnecessary.
+
Conclusion
+
The HPR database is very much in need of a tag mechanism. In this mini-series we have looked at the present tag storage system and have concluded that it is not a good way to store and access tags. We have looked at a somewhat better way of achieving what is required in show 2, but have concluded that this also has drawbacks. In this third episode we have examined a better way of using a relational database to represent the true relationship between episodes and tags - a many-to-many relationship.
+
Although it will require some work, it is strongly recommended that we implement a tag scheme this way. It is also recommended that:
+
+
We enable the foreign key capabilities of MariaDB which will give many advantages when managing these new tables (and others).
+
We look at performing a similar database upgrade to enable the many-to-many relationship of hosts and episodes to be properly represented.
+
Although not as critical as the hosts/episodes relationship we should also set up a many-to-many relationship between episodes and series.
+
+
Epilogue
+
A couple of requests regarding tags:
+
+
Please include tags when uploading your shows. Just add a few keywords to the tags field reflecting what your show was about or topics you spoke about.
+
Note: more contributions to the project to add missing tags will always be welcome! Visit the page on the HPR website listing missing summaries and tags to find out how you could help.
+
+
diff --git a/eps/hpr2270/hpr2270_normalise_tags_2.sql b/eps/hpr2270/hpr2270_normalise_tags_2.sql
new file mode 100755
index 0000000..f9a4123
--- /dev/null
+++ b/eps/hpr2270/hpr2270_normalise_tags_2.sql
@@ -0,0 +1,62 @@
+/*
+ * -----------------------------------------------------------------------------
+ * Many-to-many tag tables
+ *
+ * The 'DROP ...' things are in this file in case we plan to regenerate
+ * everything, perhaps after a table design change.
+ * -----------------------------------------------------------------------------
+ *
+ * .............................................................................
+ *
+ * Create table 'tags2'
+ *
+ * This holds all tags with an associated index. It's called 'tags2' because
+ * we already have a 'tags' table demonstrating an alternative tag solution.
+ *
+ * .............................................................................
+ *
+ */
+-- DROP TABLE IF EXISTS tags2;
+CREATE TABLE tags2 (
+ id int(5) PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ tag varchar(200) NOT NULL,
+ lctag varchar(200) NOT NULL
+);
+
+/*
+ * An index to make it easier to find tags and to enforce uniqueness
+ */
+
+-- DROP INDEX tags2_tag ON tags2;
+CREATE UNIQUE INDEX tags2_tag ON tags2 (tag);
+
+/*
+ * .............................................................................
+ *
+ * Create table 'eps_tags2_xref'
+ *
+ * This is the cross reference or 'joining' table
+ *
+ * .............................................................................
+ *
+ */
+-- DROP TABLE IF EXISTS eps_tags2_xref;
+CREATE TABLE eps_tags2_xref (
+ eps_id int(5) NOT NULL,
+ tags2_id int(5) NOT NULL
+);
+
+/*
+ * Make a primary key from the two columns
+ */
+-- DROP INDEX all_eps_tags2 ON eps_tags2_xref;
+CREATE UNIQUE INDEX all_eps_tags2 ON eps_tags2_xref (eps_id,tags2_id);
+
+/*
+ * Make a tag id index to speed deletion (special case)
+ */
+-- DROP INDEX all_tags ON eps_tags2_xref;
+CREATE INDEX all_tags ON eps_tags2_xref (tags2_id);
+
+
+-- vim: syntax=sql:ts=8:ai:tw=78:et:fo=tcrqn21:comments+=b\:--
diff --git a/eps/hpr2270/hpr2270_refresh_tags_2 b/eps/hpr2270/hpr2270_refresh_tags_2
new file mode 100755
index 0000000..a2b8201
--- /dev/null
+++ b/eps/hpr2270/hpr2270_refresh_tags_2
@@ -0,0 +1,769 @@
+#!/usr/bin/env perl
+#===============================================================================
+#
+# FILE: refresh_tags_2
+#
+# USAGE: ./refresh_tags_2
+#
+# DESCRIPTION: Parse tags from the eps.tags field and use them to populate
+# the eps_tags2_xref and tags2 tables. The eps tag list is
+# definitive (though it's quite limited since it's only 200
+# characters long), and so the junction table eps_tags2_xref and
+# the normalised tags table tags2 are kept in step by adding
+# and deleting.
+# This script is for demonstration purposes. It is not the
+# definitive answer to the tag management problem in the HPR
+# database, though it's close :-)
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: ---
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.0.3
+# CREATED: 2016-07-22 16:48:49
+# REVISION: 2017-03-14 21:11:33
+#
+#===============================================================================
+
+use 5.010;
+use strict;
+use warnings;
+use utf8;
+
+use Carp;
+use Getopt::Long;
+use Config::General;
+use Text::CSV;
+use SQL::Abstract;
+use DBI;
+
+use Data::Dumper;
+
+#
+# Version number (manually incremented)
+#
+our $VERSION = '0.0.3';
+
+#
+# Script and directory names
+#
+( my $PROG = $0 ) =~ s|.*/||mx;
+( my $DIR = $0 ) =~ s|/?[^/]*$||mx;
+$DIR = '.' unless $DIR;
+
+#-------------------------------------------------------------------------------
+# Declarations
+#-------------------------------------------------------------------------------
+#
+# Constants and other declarations
+#
+my $basedir = "$ENV{HOME}/HPR/Database";
+my $configfile = "$basedir/.hpr_db.cfg";
+
+my ( $dbh, $sth1, $h1, $rv );
+my ( %eps_tags, %tags_tags, %diffs );
+
+#
+# Enable Unicode mode
+#
+binmode STDOUT, ":encoding(UTF-8)";
+binmode STDERR, ":encoding(UTF-8)";
+
+#
+# Load configuration data
+#
+my $conf = Config::General->new(
+ -ConfigFile => $configfile,
+ -InterPolateVars => 1,
+ -ExtendedAccess => 1,
+);
+my %config = $conf->getall();
+
+#-------------------------------------------------------------------------------
+# Options and arguments
+#-------------------------------------------------------------------------------
+#
+# Process options
+#
+my %options;
+Options( \%options );
+
+Usage() if ( $options{'help'} );
+
+#
+# Collect options
+#
+my $verbose = ( defined( $options{verbose} ) ? $options{verbose} : 0 );
+my $dry_run = ( defined( $options{'dry-run'} ) ? $options{'dry-run'} : 1 );
+
+#-------------------------------------------------------------------------------
+# Connect to the database
+#-------------------------------------------------------------------------------
+my $dbhost = $config{database}->{host} // '127.0.0.1';
+my $dbport = $config{database}->{port} // 3306;
+my $dbname = $config{database}->{name};
+my $dbuser = $config{database}->{user};
+my $dbpwd = $config{database}->{password};
+$dbh = DBI->connect( "dbi:mysql:host=$dbhost;port=$dbport;database=$dbname",
+ $dbuser, $dbpwd, { AutoCommit => 1 } )
+ or croak $DBI::errstr;
+
+#
+# Enable client-side UTF8
+#
+$dbh->{mysql_enable_utf8} = 1;
+
+#-------------------------------------------------------------------------------
+# Collect and process the id numbers and tags from the 'eps' table
+#-------------------------------------------------------------------------------
+%eps_tags = %{ collect_eps_tags( $dbh, $verbose ) };
+
+#-------------------------------------------------------------------------------
+# Collect any tags we've already stashed in the database.
+#-------------------------------------------------------------------------------
+%tags_tags = %{ collect_db_tags( $dbh, $verbose ) };
+
+#-------------------------------------------------------------------------------
+# Now compare the two sources to look for differences
+#-------------------------------------------------------------------------------
+%diffs = %{ find_differences(\%eps_tags,\%tags_tags) };
+
+#-------------------------------------------------------------------------------
+# Perform the updates if there are any
+#-------------------------------------------------------------------------------
+if (%diffs) {
+ print "Differences found\n\n";
+ unless ($dry_run) {
+ #
+ # Scan for all deletions in the %diffs hash by traversing it by sorted
+ # episode number. If deletions are found for an episode they are
+ # performed.
+ #
+ foreach my $id ( sort { $a <=> $b } keys(%diffs) ) {
+ if ( exists( $diffs{$id}->{deletions} ) ) {
+ do_deletions( $dbh, $verbose, $id, $diffs{$id}->{deletions} );
+ }
+ }
+
+ #
+ # Prepare to search for tags
+ #
+ $sth1 = $dbh->prepare(q{SELECT * FROM tags2 WHERE tag = ?})
+ or die $DBI::errstr;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ #
+ # Scan for all additions in the %diffs hash
+ #
+ foreach my $id ( sort { $a <=> $b } keys(%diffs) ) {
+ if ( exists( $diffs{$id}->{additions} ) ) {
+ do_additions( $dbh, $sth1, $verbose, $id,
+ $diffs{$id}->{additions} );
+ }
+ }
+
+ #
+ # Having deleted all the requested rows from the junction table remove
+ # any tags that are "orphaned" as a consequence. If we were using
+ # foreign keys we could let the database do this.
+ #
+ $sth1 = $dbh->prepare(
+ q{DELETE FROM tags2
+ WHERE id NOT IN (SELECT DISTINCT tags2_id FROM eps_tags2_xref)}
+ ) or die $DBI::errstr;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ $rv = $sth1->execute;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+ $rv = 0 if ( $rv eq '0E0' );
+
+ #
+ # Report the action
+ #
+ if ($rv) {
+ print "Deleted ", $rv, " orphan tag", ( $rv != 1 ? 's' : '' ),
+ "\n";
+ }
+
+ }
+ else {
+ print "No changes made - dry run\n";
+ }
+}
+else {
+ print "No differences found\n";
+}
+
+exit;
+
+#=== FUNCTION ================================================================
+# NAME: collect_eps_tags
+# PURPOSE: Collects the tags from the eps.tags field
+# PARAMETERS: $dbh Database handle
+# $verbose Verbosity level
+# RETURNS: A reference to the hash created by collecting all the tags
+#  DESCRIPTION: Queries the 'eps' table for all episodes with a non-empty
+#               'tags' field, parses each field as CSV and returns a hash
+#               keyed by episode id where each element is a reference to
+#               a sorted array of the trimmed tags
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub collect_eps_tags {
+ my ( $dbh, $verbose ) = @_;
+
+ my ( $status, @fields, %hash );
+ my ( $sth, $h );
+
+ #
+ # For parsing the field as CSV
+ #
+    my $csv = Text::CSV->new;
+
+ #
+ # Query the eps table for all the id and tags
+ #
+ $sth = $dbh->prepare(
+ q{SELECT id,tags FROM eps
+ WHERE length(tags) > 0
+ ORDER BY id}
+ ) or die $DBI::errstr;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ $sth->execute;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ #
+ # Loop through what we got
+ #
+ while ( $h = $sth->fetchrow_hashref ) {
+ #
+ # Parse the tag list
+ #
+ $status = $csv->parse( $h->{tags} );
+ unless ($status) {
+ #
+ # Report any errors
+ #
+ print "Parse error on episode ", $h->{id}, "\n";
+ print $csv->error_input(), "\n";
+ next;
+ }
+ @fields = $csv->fields();
+
+ next unless (@fields);
+
+ #
+ # Trim all tags (don't alter $_ when doing it)
+ #
+ @fields = map {
+ my $t = $_;
+ $t =~ s/(^\s+|\s+$)//g;
+ $t;
+ } @fields;
+
+ #print "$h->{id}: ",join(",",@fields),"\n";
+
+ #
+ # Save the id and its tags, sorted for comparison
+ #
+ $hash{ $h->{id} } = [ sort @fields ];
+
+ }
+
+ #print Dumper(\%hash),"\n";
+
+ #
+ # Dump all id numbers and tags if the verbose level is high enough
+ #
+ if ( $verbose >= 3 ) {
+ print "\nTags collected from the 'eps' table\n\n";
+ foreach my $id ( sort { $a <=> $b } keys(%hash) ) {
+ printf "%04d: %s\n", $id, join( ",", @{ $hash{$id} } );
+ }
+ }
+
+ return \%hash;
+
+}
+
+#=== FUNCTION ================================================================
+# NAME: collect_db_tags
+# PURPOSE: Collects the tags already stored in the database
+# PARAMETERS: $dbh Database handle
+# $verbose Verbosity level
+# RETURNS: A reference to the hash created by collecting all the tags
+#  DESCRIPTION: Queries the junction table 'eps_tags2_xref' joined to the
+#               'tags2' table and returns a hash keyed by episode id where
+#               each element is a reference to a sorted array of tags
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub collect_db_tags {
+ my ( $dbh, $verbose ) = @_;
+
+ my %hash;
+ my ( $sth, $h );
+
+ #
+ # Query the database for tag data
+ #
+ # We use the junction table (eps_tags2_xref), traversing it by episode number
+ # and linking the table of tags (tags2). This results in a list of the tags
+ # relating to an episode, which should be similar to (if not the same as) the
+ # 'tags' field in the 'eps' table.
+ #
+ $sth = $dbh->prepare(
+ q{SELECT et.eps_id AS id,t.tag,t.lctag
+ FROM eps_tags2_xref et
+ JOIN tags2 t ON et.tags2_id = t.id
+ ORDER BY et.eps_id}
+ ) or die $DBI::errstr;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ $sth->execute;
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ #
+ # Loop through what we got, building an array of tags per episode number
+ #
+ while ( $h = $sth->fetchrow_hashref ) {
+ if ( defined( $hash{ $h->{id} } ) ) {
+ push( @{ $hash{ $h->{id} } }, $h->{tag} );
+ }
+ else {
+ $hash{ $h->{id} } = [ $h->{tag} ];
+ }
+ }
+
+ #
+ # Sort all the tag arrays for comparison
+ #
+ foreach my $id ( keys(%hash) ) {
+ $hash{$id} = [ sort @{ $hash{$id} } ];
+ }
+
+ #
+ # Dump all id numbers and tags if the verbose level is high enough
+ #
+ if ( $verbose >= 3 ) {
+ print "\nTags collected from the 'tags2' table\n\n";
+ foreach my $id ( sort { $a <=> $b } keys(%hash) ) {
+ printf "%04d: %s\n", $id, join( ",", @{ $hash{$id} } );
+ }
+ print '=-' x 40,"\n";
+ }
+
+ return \%hash;
+
+}
+
+#=== FUNCTION ================================================================
+# NAME: find_differences
+# PURPOSE: Find the differences between two hashes containing tags
+# PARAMETERS: $master Reference to the master hash
+# $slave Reference to the slave hash
+# RETURNS: A reference to the hash created checking for differences
+# DESCRIPTION: The function is presented with two hashes. The 'master' hash
+# has come from the CSV string in the 'eps' table. The 'slave'
+# hash has come from the table of tags 'tags2'. These hashes are
+# keyed by episode number and each element contains a reference
+# to a sorted array of tags.
+# This function compares two tag arrays for an episode using
+# function 'array_compare' and receives back a hash of additions
+# and deletions:
+# {
+# additions => [ tag1, tag2 .. tagn ],
+# deletions => [ tag1, tag2 .. tagn ],
+# }
+# These are stored in a result hash keyed by episode number, and
+# a reference to this hash is returned to the caller.
+# This function can report a lot of details about what has been
+# found if the level of verbosity is high enough.
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub find_differences {
+ my ($master,$slave) = @_;
+
+ my %hash;
+
+ foreach my $id ( sort { $a <=> $b } keys(%$master) ) {
+ my %iddiffs = array_compare( $master->{$id}, $slave->{$id} );
+ if (%iddiffs) {
+ if ( $verbose >= 1 ) {
+ #
+ # Report what was found if asked to
+ #
+ print "Episode: $id\n";
+ print "Update:\n\teps: ", join( ",", @{ $master->{$id} } ), "\n";
+ print "\ttags: ",
+ (
+ defined( $slave->{$id} )
+ ? join( ",", @{ $slave->{$id} } )
+ : '--None--' ), "\n";
+ print '-' x 80,"\n";
+ }
+ $hash{$id} = {%iddiffs};
+ }
+ }
+
+ #
+ # Report differences and actions if the verbose level is high enough
+ #
+ if ( $verbose >= 2 ) {
+ print "\nDifferences and actions\n\n";
+ foreach my $id ( sort { $a <=> $b } keys(%hash) ) {
+ print "Episode: $id\n";
+ if ( exists( $hash{$id}->{deletions} ) ) {
+ print "Deletions: ";
+ print join( ",", @{ $hash{$id}->{deletions} } ), "\n";
+ }
+ if ( exists( $hash{$id}->{additions} ) ) {
+ print "Additions: ";
+ print join( ",", @{ $hash{$id}->{additions} } ), "\n";
+ }
+ print '-' x 80, "\n";
+ }
+ }
+
+ return \%hash;
+}
+
+#=== FUNCTION ================================================================
+# NAME: do_deletions
+# PURPOSE: Perform any deletions indicated in an array for a given
+# episode
+# PARAMETERS: $dbh Database handle
+# $verbose Verbosity level
+# $id Episode number
+# $tags Reference to an array of tags for this episode
+# RETURNS: Nothing
+# DESCRIPTION: A tag deletion consists of its removal from the joining table.
+# Only when there are no more references to the actual tag can
+# it then be deleted. If the tables were in a database with
+# foreign keys then we could leave the database itself to handle
+# this (MariaDB could do it but we'd need to redefine the tables
+# to use InnoDB rather than MyISAM. The latter is the legacy
+# table structure from the days when MySQL didn't have foreign
+# keys).
+#               This function does not perform the tag deletion since it is
+#               easier to leave this until all deletions have finished.
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub do_deletions {
+ my ( $dbh, $verbose, $id, $tags ) = @_;
+
+ my ( $stmt, @bind, %data, %where );
+
+ #
+ # We will dynamically build SQL as we go
+ #
+ my $sql = SQL::Abstract->new;
+
+ #
+ # Process the list of tags we have been given
+ #
+ for my $i ( 0 .. $#$tags ) {
+ #
+ # Set up a deletion '... where eps_id = ? and
+ # tags2 = (select id from tags2 where tag = ?)'
+ #
+ my ( $sub_stmt, @sub_bind )
+ = ( "SELECT id FROM tags2 WHERE tag = ?", $tags->[$i] );
+
+ %where = (
+ eps_id => $id,
+ tags2_id => \[ "= ($sub_stmt)" => @sub_bind ]
+ );
+
+ ( $stmt, @bind ) = $sql->delete( 'eps_tags2_xref', \%where );
+ if ( $verbose >= 2 ) {
+ print "Statement: $stmt\n";
+ print "Bind: ", join( ",", @bind ), "\n";
+ }
+
+ #
+ # Do the deletion
+ #
+ my $sth = $dbh->prepare($stmt);
+ my $rv = $sth->execute(@bind);
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+ $rv = 0 if ( $rv eq '0E0' );
+
+ #
+ # Report the action
+ #
+ if ($rv) {
+ print "Deleted tag for show $id ($tags->[$i])\n";
+ }
+
+ }
+
+ print "Deleted ", scalar(@$tags), " row",
+ ( scalar(@$tags) != 1 ? 's' : '' ), "\n";
+
+}
+
+#=== FUNCTION ================================================================
+# NAME: do_additions
+# PURPOSE: Perform any additions indicated in an array for a given
+# episode
+# PARAMETERS: $dbh Database handle
+# $sth A prepared database handle with a query to
+# search for the target tag
+# $verbose Verbosity level
+# $id Episode number
+# $tags Reference to an array of tags for this episode
+# RETURNS: Nothing
+# DESCRIPTION: The addition of a tag for an episode consists of creating the
+# tag in the 'tags2' table (unless it already exists) and
+#               making a joining table entry for it. This is what this
+#               function does.
+# FIXME: Not very resilient to failure.
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub do_additions {
+ my ( $dbh, $sth, $verbose, $id, $tags ) = @_;
+
+ my ( $sth1, $rv, $h, $tid, $stmt, @bind, %data );
+
+ #
+ # We will dynamically build SQL as we go
+ #
+ my $sql = SQL::Abstract->new;
+
+ my @lctags = map { lc($_) } @$tags;
+
+ #
+ # Loop through the array of tags (using an integer so we can index the
+ # current tag)
+ #
+ for my $i ( 0 .. $#$tags ) {
+ #
+ # Look to see if this tag exists
+ #
+ $sth->execute( $tags->[$i] );
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+
+ #
+ # If it's already in the table just store the id otherwise
+ # add a new entry
+ #
+ if ( $h = $sth->fetchrow_hashref ) {
+ $tid = $h->{id};
+ }
+ else {
+ #
+ # Build the row we're going to add
+ #
+ %data = (
+ tag => $tags->[$i],
+ lctag => $lctags[$i]
+ );
+
+ #
+ # Build the SQL, reporting the result if asked
+ #
+ ( $stmt, @bind ) = $sql->insert( 'tags2', \%data );
+ if ( $verbose >= 2 ) {
+ print "Statement: $stmt\n";
+ print "Bind: ", join( ",", @bind ), "\n";
+ }
+
+ #
+ # Add the tag to 'tags2'
+ #
+ $sth1 = $dbh->prepare($stmt);
+ $rv = $sth1->execute(@bind);
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+ $rv = 0 if ( $rv eq '0E0' );
+
+ #
+            # Ask the database for the id we just added
+ # FIXME: what if it failed?
+ #
+ $tid = $sth1->{mysql_insertid};
+
+ #
+ # Report the action
+ #
+ if ($rv) {
+ print "Added new tag '$tags->[$i]' ($tid)\n";
+ }
+ }
+
+ #
+ # Now we know we have a tag in the tags2 table so now we can create
+ # the eps_tags2_xref entry
+ #
+ %data = (
+ eps_id => $id,
+ tags2_id => $tid
+ );
+
+ #
+ # Build the SQL, reporting the result if asked
+ #
+ ( $stmt, @bind ) = $sql->insert( 'eps_tags2_xref', \%data );
+ if ( $verbose >= 2 ) {
+ print "Statement: $stmt\n";
+ print "Bind: ", join( ",", @bind ), "\n";
+ }
+
+ #
+ # Add the row
+ #
+ $sth1 = $dbh->prepare($stmt);
+ $rv = $sth1->execute(@bind);
+ if ( $dbh->err ) {
+ warn $dbh->errstr;
+ }
+ $rv = 0 if ( $rv eq '0E0' );
+
+ #
+ # Report the action
+ #
+ if ($rv) {
+ printf "Added new junction row (eps_id=%s,tags2_id=%s -> %s)\n",
+ $id, $tid, $tags->[$i];
+ }
+
+ }
+
+ print "Added ", scalar(@$tags), " row",
+ ( scalar(@$tags) != 1 ? 's' : '' ), "\n";
+
+}
+
+#=== FUNCTION ================================================================
+# NAME: array_compare
+# PURPOSE: Compares the elements of two arrays to see if an element
+# present in the master is also present in the slave
+# PARAMETERS: $arr1 A reference to the first array; the MASTER
+# $arr2 A reference to the second array; the SLAVE
+# RETURNS: A hash containing arrays of additions and deletions of the
+# elements that are different. The structure is:
+# {
+# additions => [ tag1, tag2 .. tagn ],
+# deletions => [ tag1, tag2 .. tagn ],
+# }
+# The returned hash will be empty if there are no differences.
+# DESCRIPTION: The requirement is to find if there are differences, then to
+# find what they are so that other code can make the slave array
+# match the master. The two arrays come from a database, so
+# we're trying to make a second source (slave) equal the first
+# (master).
+# THROWS: No exceptions
+# COMMENTS: None
+# SEE ALSO: N/A
+#===============================================================================
+sub array_compare {
+ my ( $arr1, $arr2 ) = @_;
+
+ my %res;
+ my ( @additions, @deletions );
+
+ #
+ # Use hashes to make it easier to find existence of stuff
+ #
+ my %h1 = map { lc($_) => 1 } @$arr1;
+ my %h2 = map { lc($_) => 1 } @$arr2;
+
+ #
+ # Find additions
+ #
+ for my $key ( keys(%h1) ) {
+ unless ( exists( $h2{$key} ) ) {
+ push( @additions, $key );
+ }
+ }
+
+ #
+ # Find deletions
+ #
+ for my $key ( keys(%h2) ) {
+ unless ( exists( $h1{$key} ) ) {
+ push( @deletions, $key );
+ }
+ }
+
+ $res{additions} = [@additions] if @additions;
+ $res{deletions} = [@deletions] if @deletions;
+
+ return %res;
+}
+
+#=== FUNCTION ================================================================
+# NAME: Usage
+# PURPOSE: Display a usage message and exit
+# PARAMETERS: None
+# RETURNS: To command line level with exit value 1
+# DESCRIPTION: Builds the usage message using global values
+# THROWS: no exceptions
+# COMMENTS: none
+# SEE ALSO: n/a
+#===============================================================================
+sub Usage {
+    print STDERR <<EOM;
+Usage: $PROG [-help] [-[no]dry-run] [-verbose ...]
+EOM
+    exit(1);
+}
+
+#=== FUNCTION ================================================================
+#         NAME: Options
+#      PURPOSE: Processes command-line options
+#   PARAMETERS: $optref     Hash reference to hold the options
+#      RETURNS: Undef
+#  DESCRIPTION: Collects the options into the hash referenced by $optref
+#               (the option list is assumed from the options used earlier
+#               in the script: help, verbose and dry-run)
+#       THROWS: no exceptions
+#     COMMENTS: none
+#     SEE ALSO: n/a
+#===============================================================================
+sub Options {
+    my ($optref) = @_;
+
+    my @options = ( "help", "verbose+", "dry-run!" );
+
+    if ( !GetOptions( $optref, @options ) ) {
+        Usage();
+    }
+
+    return;
+}
+
+
\ No newline at end of file
diff --git a/eps/hpr2278/hpr2278_full_shownotes.html b/eps/hpr2278/hpr2278_full_shownotes.html
new file mode 100755
index 0000000..09f99b2
--- /dev/null
+++ b/eps/hpr2278/hpr2278_full_shownotes.html
@@ -0,0 +1,458 @@
+
+
+
+
+
+
+
+ Some supplementary Bash tips (HPR Show 2278)
+
+
+
+
+
+
+
+
+
Some supplementary Bash tips (HPR Show 2278)
+
Pathname expansion; part 1 of 2
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Expansion
+
As we saw in the last episode 2045 (and others in this sub-series) there are eight types of expansion applied to the command line in the following order:
+
+
Brace expansion (we looked at this subject in episode 1884)
Tilde expansion
Parameter and variable expansion (episode 1648)
Arithmetic expansion
Command substitution
Process substitution (episode 2045)
Word splitting
Pathname expansion (this episode and the next)
+
This is the last topic in the (sub-) series about expansion in Bash. However, when writing the notes for this episode it became apparent that there was too much to fit into a single HPR episode. Consequently I have made it into two.
+
In this episode we will look at simple pathname expansion and some of the ways in which its behaviour can be controlled. In the next episode we’ll finish by looking at extended pattern matching. Both are included in the “Manual Page Extracts” section at the end of the long notes.
+
Pathname expansion
+
This type of expansion is also known as Filename Expansion or Globbing. It is about the expansion of wildcard characters such as ‘*’ in filenames. You have almost certainly used it in commands like:
+
ls *.txt
+
The names glob and globbing have an historical origin. In the early days of Unix this type of wildcard expansion was performed by the separate program /etc/glob, an abbreviation of the phrase “global command”. Later a library function ‘glob()’ was provided to replace it and the name has stuck since then.
+
Operating systems other than Unix, and other environments and scripting languages also have a similar concept of glob patterns using wildcard characters. The actual characters often vary from those used in Bash, but the concepts are very similar. See the Wikipedia article on this subject for more details.
+
Note that this process does not use regular expressions in the sense you will have seen in other places (such as in the HPR series called “Learning sed”). These glob patterns are older and not as sophisticated.
+
Although this process of wildcard expansion is normally used in the context of file names or paths to files, such patterns are used in other contexts as well. When we looked at parameter and variable expansion in episode 1648 we saw expressions such as:
${dir##*/}
+
Here ‘*/’ matches a part of the path in variable ‘dir’ and the operation strips it all away leaving just the terminal filename in the same way as the ‘basename’ command.
+
Making test files
+
To have some files to experiment with for this episode (and the next one) I created a series of directories and files within them:
+
$ mkdir Pathname_expansion
+$ cd Pathname_expansion
+$ mkdir {a..z}
+$ for d in {a..z}
+> do
+> touch $d/${d}{a..z}{01..50}.txt
+> done
The line beginning ‘for’ is the start of a multi-line command; the ‘>’ means Bash is prompting for the next line:
+
+
Loop through the directories just created
+
Using ‘touch’ create a series of files in each one. The files begin with the letter of the directory, followed by another letter, followed by a two-digit number in the range 01-50, followed by ‘.txt’.
+
Note that to use variable ‘d’ as part of the filename it needs to be enclosed in ‘{}’ braces to separate it from the following brace expansion.
+
+
+
Each directory will therefore contain 26*50=1,300 files making a total of 33,800 (empty) files.
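+
If you want to check this, a quick count of the files with ‘find’ should agree (a minimal sketch, assuming you are still in the ‘Pathname_expansion’ directory):
+
$ find . -type f | wc -l
+33800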
+
Using the test files
+
So, now we need to look at how these various files could be referred to using pathname expansion.
+
According to the manual page:
+
+
Bash scans each word for the characters ‘*’, ‘?’, and ‘[’. If one of these characters appears, then the word is regarded as a pattern, and replaced with an alphabetically sorted list of filenames matching the pattern.
+
+
These pattern characters have the following meanings (see the “Manual Page Extracts” section below under the heading “Pattern Matching” for a more detailed description):
+
+
*
+
Matches any string, including the null string.
+
+
?
+
Matches any single character.
+
+
[…]
+
Matches any one of the enclosed characters, such as [abc].
+
A pair of characters separated by a hyphen (such as [a-z]) denotes a range expression; any character that falls between those two characters, inclusive is matched.1
+
If the first character following the ‘[’ is a ‘!’ (exclamation mark) or a ‘^’ (circumflex) then any character not enclosed is matched. Note that ‘!’ is the POSIX standard, making ‘^’ non-standard.
+
A ‘-’ (hyphen) may be matched by including it as the first or last character in the set, such as [-a-z] or [a-z-] meaning any letter from the range expression as well as the hyphen.
+
A ‘]’ (close square bracket) may be matched by including it as the first character in the set, such as []a-z].
+
Other character classes may be used within these square brackets, where a class has the form [:class:], such as [[:alnum:]] which has the same meaning as [a-zA-Z0-9]. See the more detailed manual page extracts at the end of this document.
+
+
+
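As a quick illustration of these bracket expressions using the test files made earlier (a small sketch; ‘echo’ simply prints the expanded names and ‘wc -w’ counts them):
+
$ echo a/a[a-c]0[12].txt
+a/aa01.txt a/aa02.txt a/ab01.txt a/ab02.txt a/ac01.txt a/ac02.txt
+$ echo a/a[[:alpha:]]49.txt | wc -w
+26
+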
To refer to all files in the directory ‘a’ which have ‘a’ as the second letter we could use the following pattern:
+
$ ls a/?a*
+
Here the question mark ‘?’ means any character (though we know that all the files begin with ‘a’ in this directory). This is followed by a letter ‘a’ meaning the second letter must be ‘a’. Finally there is an asterisk ‘*’ which means that the rest of the filename can be anything.
+
This command returns the 50 filenames ‘a/aa01.txt’ to ‘a/aa50.txt’. Adding ‘-w 60’ to the command restricts the number of columns produced by ls, which keeps the listing readable in these notes.
+
Some notes about pattern matching
+
As already mentioned, there is a certain resemblance between these patterns and regular expressions, which you may have encountered in other HPR episodes such as Learning sed and Learning Awk. The two should not be confused; regular expressions are far more powerful, but are not available in Bash in the same context.
+
The expansion of these patterns takes place on the command line, resulting in an alphabetical list of pathnames, and these are presented to the command. For example, the echo command may be used:
$ echo a/?a0*
+a/aa01.txt a/aa02.txt a/aa03.txt a/aa04.txt a/aa05.txt a/aa06.txt a/aa07.txt a/aa08.txt a/aa09.txt
+
Here the pattern ‘a/?a0*’ was used, meaning files in directory ‘a’ starting with any character, followed by an ‘a’, a zero and then any number of further characters. This was expanded by the Bash shell and the nine pathnames were passed to echo which printed them.
+
It might help to demonstrate this more clearly by using arrays (covered to some extent in the episode entitled Bash parameter manipulation):
$ vec=(a/?a0*)
+$ echo ${#vec[@]}
+9
+$ echo ${vec[@]}
+a/aa01.txt a/aa02.txt a/aa03.txt a/aa04.txt a/aa05.txt a/aa06.txt a/aa07.txt a/aa08.txt a/aa09.txt
+
Here the array called ‘vec’ is filled with the result of the pathname expansion using the same pattern as before. When we use the substitution syntax ‘${#vec[@]}’ we get the number of elements in the array, and ‘${vec[@]}’ returns all of the elements which are printed by echo.
+
These pattern expansions do not occur when enclosed in single or double quotation marks. Such a pattern is treated simply as a verbatim string:
+
$ echo "a/?a0*"
+a/?a0*
+
Note also that if the pattern does not end with a wildcard then it implies that the final part exactly matches the end of the file name:
$ echo a/?a0*tx
+a/?a0*tx
+
All files end with ‘txt’, so ending the pattern with ‘tx’ matches nothing and the pattern is returned unchanged.
+
Later in this episode we will look in more detail at how the expansion process just returns the pattern if there are no matches.
+
+
Using shopt in relation to pathname expansion
+
There are a number of Bash options that affect the way that pathname expansion works. These are referred to in detail in the manual page extracts at the end of these notes.
+
The shopt command is built into the Bash shell. Typing it on its own results in a list of all of the options and their settings:
+
$ shopt
+autocd off
+cdable_vars off
+cdspell off
+...
+
(The rest have been omitted.)
+
Typing shopt with the name of an option returns its current setting:
+
$ shopt dotglob
+dotglob off
+
To turn on an option use shopt -s followed by the option name (‘s’ stands for ‘set’):
+
$ shopt -s dotglob
+
Turning it off is achieved with shopt -u (‘u’ stands for ‘unset’):
+
$ shopt -u dotglob
+
The status of the settings can be reported in a form that can be saved and used as commands by specifying the -p option:
$ shopt -p dotglob
+shopt -u dotglob
+
We will look at a subset of the settings controlled by shopt. This subset consists of the settings which are of relevance to pathname expansion.
+
The dotglob option
+
This option controls whether files beginning with a dot (‘.’) are returned by pathname expansion. Normally, they are not.
+
To demonstrate this we first create a file with a name beginning with a dot:
+
$ touch a/.dotfile
+
Such files are called hidden because many parts of the operating system do not show them unless requested.
+
Normally trying to find such a file using a pathname with wildcards fails:
+
$ ls a/*dot*
+ls: cannot access 'a/*dot*': No such file or directory
+$ ls a/?dot*
+ls: cannot access 'a/?dot*': No such file or directory
+$ ls a/[.]dot*
+ls: cannot access 'a/[.]dot*': No such file or directory
+
You might think that adding the -a option to ls (which shows hidden files) might solve the problem. It does not. The issue is that the target file is not returned by the expansion, so the ls command is simply given the pattern, which it treats as a filename and there is no file called “asterisk-d-o-t-asterisk” or any of the others with literal wildcards.
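+
For example (assuming dotglob is still off at this point):
+
$ ls -a a/*dot*
+ls: cannot access 'a/*dot*': No such file or directory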
+
However, if dotglob is on then the file becomes visible:
+
$ shopt -s dotglob
+$ ls a/*dot*
+a/.dotfile
+
Of course, the file is visible to ls if the -a option is used (and dotglob is off), and no pathname expansion is used. However, in this case all 1300 other files in the directory would be listed.
+
We’ll list the filenames in one column (-1) and view just the last 3 to demonstrate this:
+
$ ls -1 -a a | tail -3
+az49.txt
+az50.txt
+.dotfile
+
As a Unix newbie I struggled with this “dotfile” issue a lot. I hope this has helped to clarify things for you.
+
The extglob option
+
This option controls whether extended pattern matching features are enabled or not. We will look at these in the next episode.
+
The failglob option
+
This option controls whether an error is produced when a pattern fails to match filenames during pathname expansion.
+
The example shows that when failglob is on the failure of the match is detected early and the command aborted, otherwise the failed pattern is passed to the ls command.
+
$ shopt -s failglob
+$ ls a/aa50*
+a/aa50.txt
+$ ls a/aa51*
+-bash: no match: a/aa51*
+$ shopt -u failglob
+$ ls a/aa51*
+ls: cannot access 'a/aa51*': No such file or directory
+
Note that turning on the failglob option has other effects that might not be very desirable, such as on Tab completion. Use with caution.
+
The globasciiranges option
+
Setting this option on disables the use of the collating sequence of the current locale, and reverts to traditional ASCII. It is relevant to bracket expressions like [a-z].
+
$ mkdir test
+$ cd test
+$ touch á
+$ touch b
+$ shopt -s globasciiranges
+$ ls [a-b]
+b
+$ shopt -u globasciiranges
+$ ls [a-b]
+á b
+$ cd -
+
Setting globasciiranges makes the file called ‘á’ disappear.
+
The globstar option
+
When this option is on the pattern ‘**’ causes recursive scanning of directories when pattern matching.
+
To demonstrate this we will create some extra directories and files:
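+
(The listing below is a minimal sketch; the directory and file names ‘extra’, ‘one’, ‘two’ and so on are just illustrative.)
+
$ mkdir -p extra/{one,two}/{deep,deeper}
+$ touch extra/{one,two}/{deep,deeper}/file{1..3}.txt
+$ tree -d extra
+extra
+├── one
+│   ├── deep
+│   └── deeper
+└── two
+    ├── deep
+    └── deeper
+
+6 directories
+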
Note the use of mkdir -p to create all directories at once, the multiple arguments to touch that use brace expansion and the tree command that draws a diagram of the directory structure.
+
Now we can list files in this tree structure by using the ‘**’ pattern. We will use echo again since ls will show all files in directories if their names are returned after expansion:
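+
Using the sketch tree above, for example:
+
$ shopt -s globstar
+$ echo extra/**/file1.txt
+extra/one/deep/file1.txt extra/one/deeper/file1.txt extra/two/deep/file1.txt extra/two/deeper/file1.txt
+$ shopt -u globstar
+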
The nullglob option
+
As we saw when discussing dotglob, a pattern that matches nothing is returned intact, which might result in a command treating it as a pathname.
+
The nullglob option, when on, results in a null string being returned in such cases.
+
$ ls a/*dot*
+ls: cannot access 'a/*dot*': No such file or directory
+$ shopt -s nullglob
+$ echo "[" a/*dot* "]"
+[ ]
+$ shopt -u nullglob
+$ echo "[" a/*dot* "]"
+[ a/*dot* ]
+
Here the ls command used before is demonstrated showing the pattern being returned when the match fails. Then nullglob is turned on and echo is used to demonstrate the null string being returned. We use (quoted) brackets to show this. When nullglob is off then the pattern is returned as before.
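+
A common practical use of nullglob is in loops, where an unmatched pattern would otherwise be processed as if it were a filename (a small sketch, assuming ‘a/.dotfile’ is still hidden because dotglob is off):
+
$ shopt -u nullglob
+$ for f in a/*dot*; do echo "Found: $f"; done
+Found: a/*dot*
+$ shopt -s nullglob
+$ for f in a/*dot*; do echo "Found: $f"; done
+$ shopt -u nullglob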
+
Conclusion
+
Pathname expansion and a knowledge of the patterns Bash uses is very important for effective use of the Bash command line or for writing Bash scripts. The various options controlled by shopt are less critical, with the exception of dotglob perhaps.
+
In the next (and final) episode about expansion we will look at other factors controlling expansion and will examine the extended pattern matching operators.
+
Manual Page Extracts
+
EXPANSION
+
Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion.
+
The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion.
+
On systems that can support it, there is an additional expansion available: process substitution. This is performed at the same time as tilde, parameter, variable, and arithmetic expansion and command substitution.
+
Only brace expansion, word splitting, and pathname expansion can change the number of words of the expansion; other expansions expand a single word to a single word. The only exceptions to this are the expansions of “$@” and “${name[@]}” as explained above (see PARAMETERS).
+
Pathname Expansion
+
After word splitting, unless the -f option has been set, bash scans each word for the characters *, ?, and [. If one of these characters appears, then the word is regarded as a pattern, and replaced with an alphabetically sorted list of filenames matching the pattern (see Pattern Matching below). If no matching filenames are found, and the shell option nullglob is not enabled, the word is left unchanged. If the nullglob option is set, and no matches are found, the word is removed. If the failglob shell option is set, and no matches are found, an error message is printed and the command is not executed. If the shell option nocaseglob is enabled, the match is performed without regard to the case of alphabetic characters. Note that when using range expressions like [a-z] (see below), letters of the other case may be included, depending on the setting of LC_COLLATE. When a pattern is used for pathname expansion, the character “.” at the start of a name or immediately following a slash must be matched explicitly, unless the shell option dotglob is set. When matching a pathname, the slash character must always be matched explicitly. In other cases, the “.” character is not treated specially. See the description of shopt below under SHELL BUILTIN COMMANDS for a description of the nocaseglob, nullglob, failglob, and dotglob shell options.
+
The GLOBIGNORE shell variable may be used to restrict the set of filenames matching a pattern. If GLOBIGNORE is set, each matching filename that also matches one of the patterns in GLOBIGNORE is removed from the list of matches. The filenames “.” and “..” are always ignored when GLOBIGNORE is set and not null. However, setting GLOBIGNORE to a non-null value has the effect of enabling the dotglob shell option, so all other filenames beginning with a “.” will match. To get the old behavior of ignoring filenames beginning with a “.”, make “.*” one of the patterns in GLOBIGNORE. The dotglob option is disabled when GLOBIGNORE is unset.
+
Pattern Matching
+
Any character that appears in a pattern, other than the special pattern characters described below, matches itself. The NUL character may not occur in a pattern. A backslash escapes the following character; the escaping backslash is discarded when matching. The special pattern characters must be quoted if they are to be matched literally.
+
The special pattern characters have the following meanings:
+
+
*
+
Matches any string, including the null string. When the globstar shell option is enabled, and * is used in a pathname expansion context, two adjacent *s used as a single pattern will match all files and zero or more directories and subdirectories. If followed by a /, two adjacent *s will match only directories and subdirectories.
+
+
?
+
Matches any single character.
+
+
[…]
+
Matches any one of the enclosed characters. A pair of characters separated by a hyphen denotes a range expression; any character that falls between those two characters, inclusive, using the current locale’s collating sequence and character set, is matched. If the first character following the [ is a ! or a ^ then any character not enclosed is matched. The sorting order of characters in range expressions is determined by the current locale and the values of the LC_COLLATE or LC_ALL shell variables, if set. To obtain the traditional interpretation of range expressions, where [a-d] is equivalent to [abcd], set value of the LC_ALL shell variable to C, or enable the globasciiranges shell option. A - may be matched by including it as the first or last character in the set. A ] may be matched by including it as the first character in the set.
+
Within [ and ], character classes can be specified using the syntax [:class:], where class is one of the following classes defined in the POSIX standard: alnum alpha ascii blank cntrl digit graph lower print punct space upper word xdigit A character class matches any character belonging to that class. The word character class matches letters, digits, and the character _.
+
Within [ and ], an equivalence class can be specified using the syntax [=c=], which matches all characters with the same collation weight (as defined by the current locale) as the character c.
+
Within [ and ], the syntax [.symbol.] matches the collating symbol symbol.
+
+
+
If the extglob shell option is enabled using the shopt builtin, several extended pattern matching operators are recognized. In the following description, a pattern-list is a list of one or more patterns separated by a |. Composite patterns may be formed using one or more of the following sub-patterns:
+
+
?(pattern-list)
+
Matches zero or one occurrence of the given patterns
+
+
*(pattern-list)
+
Matches zero or more occurrences of the given patterns
+
+
+(pattern-list)
+
Matches one or more occurrences of the given patterns
+
+
@(pattern-list)
+
Matches one of the given patterns
+
+
!(pattern-list)
+
Matches anything except one of the given patterns
+
+
+
+
SHELL BUILTIN COMMANDS
+
This is an extract relating to the shopt builtin. Only the options relating to pathname expansion are included. For the full list refer to the Bash manual page.
+
+
shopt [-pqsu] [-o] [optname …]
+
Toggle the values of settings controlling optional shell behavior. The settings can be either those listed below, or, if the -o option is used, those available with the -o option to the set builtin command. With no options, or with the -p option, a list of all settable options is displayed, with an indication of whether or not each is set. The -p option causes output to be displayed in a form that may be reused as input. Other options have the following meanings:
+
-s Enable (set) each optname.
+-u Disable (unset) each optname.
+-q Suppresses normal output (quiet mode); the return status
+ indicates whether the optname is set or unset. If multiple optname
+ arguments are given with -q, the return status is zero if all optnames
+ are enabled; non-zero otherwise.
+-o Restricts the values of optname to be those defined for the -o
+ option to the set builtin.
+
If either -s or -u is used with no optname arguments, shopt shows only those options which are set or unset, respectively. Unless otherwise noted, the shopt options are disabled (unset) by default.
+
The return status when listing options is zero if all optnames are enabled, non-zero otherwise. When setting or unsetting options, the return status is zero unless an optname is not a valid shell option.
+
The list of shopt options is:
+
dotglob If set, bash includes filenames beginning with a `.' in the
+ results of pathname expansion.
+extglob If set, the extended pattern matching features described above
+ under Pathname Expansion are enabled.
+failglob
+ If set, patterns which fail to match filenames during pathname
+ expansion result in an expansion error.
+globasciiranges
+ If set, range expressions used in pattern matching bracket
+ expressions (see Pattern Matching above) behave as if in the
+ traditional C locale when performing comparisons. That is, the
+ current locale's collating sequence is not taken into account,
+ so b will not collate between A and B, and upper-case and
+ lower-case ASCII characters will collate together.
+globstar
+ If set, the pattern ** used in a pathname expansion context
+ will match all files and zero or more directories and
+ subdirectories. If the pattern is followed by a /, only
+ directories and subdirectories match.
+nocaseglob
+ If set, bash matches filenames in a case-insensitive fashion
+ when performing pathname expansion (see Pathname Expansion
+ above).
+nullglob
+ If set, bash allows patterns which match no files (see
+ Pathname Expansion above) to expand to a null string, rather
+ than themselves.
+
+
+
+
+
+
+
The simple concept of a range expression is complicated considerably by the fact that since it was invented many more character sets than plain ASCII have been added. The way in which such ranges are interpreted depends on the current LOCALE. See the Manual Page Extracts section for details.↩
As we saw in the last episode 2278 (and others in this sub-series) there are eight types of expansion applied to the command line in the following order:
+
+
Brace expansion (we looked at this subject in episode 1884)
Tilde expansion
Parameter and variable expansion (episode 1648)
Arithmetic expansion
Command substitution
Process substitution (episode 2045)
Word splitting
Pathname expansion (the previous episode 2278 and this one)
+
+
This is the last topic in the (sub-) series about expansion in Bash.
+
In this episode we will look at extended pattern matching as also defined in the “Manual Page Extracts” section at the end of the long notes.
+
Pathname expansion - continued
+
As we saw in the last episode (2278), if we enable the option ‘extglob’ using the ‘shopt’ command we enable a number of additional extended pattern matching features1.
+
In the following description, a pattern-list is a list of one or more patterns separated by a ‘|’. Composite patterns may be formed using one or more of the following sub-patterns:
+
+
?(pattern-list)
+
Matches zero or one occurrence of the given patterns
+
+
*(pattern-list)
+
Matches zero or more occurrences of the given patterns
+
+
+(pattern-list)
+
Matches one or more occurrences of the given patterns
+
+
@(pattern-list)
+
Matches one of the given patterns
+
+
!(pattern-list)
+
Matches anything except one of the given patterns
+
+
+
Notes
+
+
This is a fairly new feature
+
It does not seem to be very well documented
+
There are some similarities to regular expressions
+
+
Warning!: It is not explained explicitly in the Bash manpage but these patterns are applied to each filename. So the pattern:
+
a?(b)c
+
matches a file which begins with ‘a’, is followed by zero or one instance of letter ‘b’ and ends with ‘c’. This means it can match only the filenames ‘abc’ and ‘ac’. This is explained more completely below.
+
Some of the confusion this can cause can be seen in the Stack Exchange questions listed in the Links section below.
+
Examples
+
It turns out that the 33,800 files generated in the last episode are not particularly useful when demonstrating how this feature works. I had not investigated extended glob patterns when I created them unfortunately.
+
Although these files will be used for these examples we will create some more directories and files of a simpler structure, and will turn on ‘extglob’ (assuming it’s not on by default - see the footnote):
+
$ cd Pathname_expansion
+$ mkdir test
+$ touch test/{abbc,abc,ac,axc}
+$ touch test/{x,xx,xxx}.dat
+$ ls -1 test/
+abbc
+abc
+ac
+axc
+x.dat
+xx.dat
+xxx.dat
+$ shopt -s extglob
+
(Some examples here are derived from the Stack Exchange articles mentioned earlier and listed in the Links section.)
+
Example 1 - “match zero or one occurrence”
+
?(pattern-list)
+
In the first demonstration we are asking for zero or one occurrence of ‘b’ between the ‘a’ and ‘c’. We get the files ‘abc’ and ‘ac’ because they match the zero and one cases.
+
$ echo test/a?(b)c
+test/abc test/ac
+
Next we have asked for zero or one letter ‘b’ or letter ‘x’ in the centre, so in this case we also see ‘axc’.
+
$ echo test/a?(b|x)c
+test/abc test/ac test/axc
+
Note that the pattern list has become a little more complex, since we have an alternative character.
+
Now we will move to a more complex example using the large collection of test files.
+
Here we are searching through the directories that start with a vowel for all files that have ‘a’ or ‘b’ as the second letter and ‘01’, ‘10’ or ‘11’ as the next two digits, or files whose second letter is ‘a’ or ‘b’ followed by the digits ‘50’:
+
$ ls -x -w 50 [aeiou]/?(?[ab][01][01]*|?[ab]50*)
+
The ‘-w 50’ option to ‘ls’ limits the output width for better readability in these notes. We also use ‘-x’ which lists files in row order rather than the default column order so you can read left to right.
+
There are some important points to understand in this example:
+
+
Although we are using the “match zero or one occurrence” sub-pattern there are no cases where there are zero matches. The main benefit we are getting from this feature is that we can use alternation (vertical bar).
+
Use of the ‘*’ wildcard in the sub-pattern avoids the need to be explicit about the ‘.txt’ suffix on the files. The same effect would be achieved with the following:
+
[aeiou]/?(?[ab][01][01]|?[ab]50).txt
+
Adding a ‘*’ wildcard to the end will result in the sub-expression having no effect, and all files in the directories will be returned. That is because the wildcard matches everything! The difference is shown below by counting the words each expansion generates:
+
$ echo [aeiou]/?(?[ab][01][01]*|?[ab]50*) | wc -w
+40
+$ echo [aeiou]/?(?[ab][01][01]*|?[ab]50*)* | wc -w
+6500
+
+
Example 2 - “match zero or more occurrences”
+
*(pattern-list)
+
In the next demonstration we are asking for zero or more occurrences of ‘b’ between the ‘a’ and ‘c’. We get the files ‘abbc’, ‘abc’ and ‘ac’ because they match the zero and more than zero cases.
+
$ echo test/a*(b)c
+test/abbc test/abc test/ac
+
Not surprisingly, adding ‘x’ to the list in the sub-expression also returns ‘axc’.
+
$ echo test/a*(b|x)c
+test/abbc test/abc test/ac test/axc
+
The ‘.dat’ test files can be matched in a similar way:
+
$ echo test/*(x).dat
+test/x.dat test/xx.dat test/xxx.dat
+
There is no instance of zero ‘x’es followed by ‘.dat’ but a file ‘.dat’ would match, though it would only be shown if ‘dotglob’ was set.
+
Applying this sub-pattern to the large collection of test files from the last episode we might want to find all files in directory ‘a’ which begin with two ‘a’s and numbers in the range 1-3:
+
$ ls -x -w 50 a/aa*(1|2|3).txt
+
You might expect to get back only ‘a/aa11.txt’, ‘a/aa22.txt’ and ‘a/aa33.txt’ but what is actually returned matches ‘aa’ followed by two numbers, each in the range 1-3. This is the same as:
+
$ ls -x -w 50 a/aa[1-3][1-3].txt
+
Just to demonstrate how these sub-patterns work, the following example returns the three files in the first column above:
+
$ ls -1 a/?(*(a)*(1)|*(a)*(2)|*(a)*(3)).txt
+a/aa11.txt
+a/aa22.txt
+a/aa33.txt
+
However, it does not seem very practical!
+
+
Example 3 - “match one or more occurrences”
+
+(pattern-list)
+
The next demonstration requests one or more instances of the letter ‘b’ between the other letters and returns the files ‘abbc’ (two ‘b’s) and ‘abc’ (one ‘b’):
+
$ echo test/a+(b)c
+test/abbc test/abc
+
As before, adding ‘x’ as an alternative adds file ‘axc’ to the list:
+
$ echo test/a+(b|x)c
+test/abbc test/abc test/axc
+
The following example looks in directories ‘a’ and ‘b’ for files that begin with an ‘a’ or a ‘b’ and end with ‘01.txt’:
$ ls [ab]/+(a|b)*01.txt
+
Example 4 - “match one of the given patterns”
+
@(pattern-list)
+
This demonstration requests one instance of the letter ‘b’ between the other letters and returns one file ‘abc’:
+
$ echo test/a@(b)c
+test/abc
+
Again, adding ‘x’ as an alternative adds file ‘axc’ to the list:
+
$ echo test/a@(b|x)c
+test/abc test/axc
+
To make some better search targets I ran the following commands:
+
$ mkdir words
+$ while read word; do
+> word=${word%[^a-zA-Z]*}
+> word=${word,,}
+> touch words/$word
+> done < <(shuf -n100 /usr/share/dict/words)
+
+
A directory ‘words’ was created
+
A ‘while’ loop was started to read data into a variable called ‘word’ (this starts a multi-line command so the prompt changes to ‘>’ until the entire loop is typed in)
+
The ‘word’ variable is stripped of all non alphabetic characters at the end to remove trailing apostrophes or ‘'s’ sequences.
+
The ‘word’ variable is converted to lower case
+
The ‘touch’ command makes an empty file named whatever variable ‘word’ contains
+
The loop ends with ‘done’ and the loop is “fed” with data by a process substitution (see show 2045). This runs the ‘shuf’ command to return 100 random words from ‘/usr/share/dict/words’.
+
+
If you try this you will get different words.
+
In my case I used the following command to return words containing one of ‘ee’, ‘oo’, ‘th’ and ‘ss’:
$ ls words/*@(ee|oo|th|ss)*
+
Example 5 - “match anything except one of the given patterns”
+
!(pattern-list)
+
$ ls -x -w 50 a/a!([c-z])[0-9][0-9].txt
+
Here we’re looking for files in the directory ‘a’ where the first letter is ‘a’ (they all are) and the second letter is not in the range ‘[c-z]’. The output here shows a subset of what was returned.
+
Let’s finish with an example searching the directory of words. This time we have a pattern within a pattern. The inner pattern is a @(pattern-list) which contains a list of pairs of letters, mostly identical. This pattern is surrounded by asterisk wildcards. The effect of this is to select all words that contain one of the letter pairs.
+
This is enclosed in a !(pattern-list) pattern which negates the inner selection making it match words which do not contain the pairs of letters.
+
$ ls -x -w 60 words/!(*@(ee|oo|th|ss)*)
+
The result is 81 of the 100 words in the directory.
+
Example 6 - use of patterns elsewhere
+
We have seen at various times in this series that glob-style patterns can be used in other contexts. One instance was when manipulating Bash parameters (show 1648):
+
$ x="aaabbbccc"
+$ echo ${x/a/-}
+-aabbbccc
+
Here we created a variable ‘x’ and used pattern substitution to replace the first ‘a’ with a hyphen.
+
$ echo ${x/+(a)/-}
+-bbbccc
+
This time we have used the ‘+(a)’ pattern to match one or more ‘a’s. Note that the matched group is replaced by one hyphen. If we want to replace each of the letters with a hyphen then we’d use an alternative type of pattern substitution that works through the entire string:
+
$ echo ${x//a/-}
+---bbbccc
+
This time we didn’t want to match a group of letters, so didn’t use extended pattern matching.
+
Another place where extended pattern matching can be used is in ‘case’ statements. I will not go into further detail about this here. However, there is a Stack Exchange question about it listed in the Links section.
+
To summarise: anywhere where a filename-type pattern match is allowed then extended patterns can be used (assuming ‘extglob’ is set).
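+
For instance, here is a minimal sketch of using an extended pattern in a ‘case’ statement (the filenames are just illustrative):
+
$ shopt -s extglob
+$ for f in x.dat xx.dat abc; do
+>     case $f in
+>         +(x).dat) echo "$f: one or more x's then .dat" ;;
+>         *)        echo "$f: something else" ;;
+>     esac
+> done
+x.dat: one or more x's then .dat
+xx.dat: one or more x's then .dat
+abc: something else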
+
Conclusion
+
Until I started investigating these extended pattern matching features of Bash I did not think I would find them particularly useful. It also took me quite a while to understand how they worked.
+
Now I actually find them quite powerful and will use them in future in scripts I write.
+
Bash extended patterns are similar in concept to Regular Expressions, although they are written totally differently. For example, the Bash pattern: ‘hot*(dog)’ means the same as the RE: ‘hot(dog)*’. They both match the words “hot” and “hotdog”. The difference is that ‘*’ in a RE means that the preceding expression may match zero or more times, and can follow many sorts of expressions. The extended pattern is not quite so general.
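+
The difference can be seen directly in Bash, which supports both forms of matching inside ‘[[ ]]’ (a small sketch; ‘==’ uses glob patterns and ‘=~’ uses regular expressions):
+
$ shopt -s extglob
+$ [[ hotdog == hot*(dog) ]] && echo "glob matched"
+glob matched
+$ [[ hotdog =~ ^hot(dog)*$ ]] && echo "RE matched"
+RE matched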
+
I hope this episode has helped you understand these Bash features and that you also find them useful.
+
Manual Page Extracts
+
EXPANSION
+
Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion.
+
The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion.
+
On systems that can support it, there is an additional expansion available: process substitution. This is performed at the same time as tilde, parameter, variable, and arithmetic expansion and command substitution.
+
Only brace expansion, word splitting, and pathname expansion can change the number of words of the expansion; other expansions expand a single word to a single word. The only exceptions to this are the expansions of “$@” and “${name[@]}” as explained above (see PARAMETERS).
Pathname Expansion
+
See the notes for HPR show 2278 for some of the material in this section.
+
After word splitting, unless the -f option has been set, bash scans each word for the characters *, ?, and [. If one of these characters appears, then the word is regarded as a pattern, and replaced with an alphabetically sorted list of filenames matching the pattern (see Pattern Matching below). If no matching filenames are found, and the shell option nullglob is not enabled, the word is left unchanged. If the nullglob option is set, and no matches are found, the word is removed. If the failglob shell option is set, and no matches are found, an error message is printed and the command is not executed. If the shell option nocaseglob is enabled, the match is performed without regard to the case of alphabetic characters. Note that when using range expressions like [a-z] (see below), letters of the other case may be included, depending on the setting of LC_COLLATE. When a pattern is used for pathname expansion, the character “.” at the start of a name or immediately following a slash must be matched explicitly, unless the shell option dotglob is set. When matching a pathname, the slash character must always be matched explicitly. In other cases, the “.” character is not treated specially. See the description of shopt below under SHELL BUILTIN COMMANDS for a description of the nocaseglob, nullglob, failglob, and dotglob shell options.
+
The GLOBIGNORE shell variable may be used to restrict the set of filenames matching a pattern. If GLOBIGNORE is set, each matching filename that also matches one of the patterns in GLOBIGNORE is removed from the list of matches. The filenames “.” and “..” are always ignored when GLOBIGNORE is set and not null. However, setting GLOBIGNORE to a non-null value has the effect of enabling the dotglob shell option, so all other filenames beginning with a “.” will match. To get the old behavior of ignoring filenames beginning with a “.”, make “.*” one of the patterns in GLOBIGNORE. The dotglob option is disabled when GLOBIGNORE is unset.
+
Pattern Matching
+
Any character that appears in a pattern, other than the special pattern characters described below, matches itself. The NUL character may not occur in a pattern. A backslash escapes the following character; the escaping backslash is discarded when matching. The special pattern characters must be quoted if they are to be matched literally.
+
The special pattern characters have the following meanings:
+
+
*
+
Matches any string, including the null string. When the globstar shell option is enabled, and * is used in a pathname expansion context, two adjacent *s used as a single pattern will match all files and zero or more directories and subdirectories. If followed by a /, two adjacent *s will match only directories and subdirectories.
+
+
?
+
Matches any single character.
+
+
[…]
+
Matches any one of the enclosed characters. A pair of characters separated by a hyphen denotes a range expression; any character that falls between those two characters, inclusive, using the current locale’s collating sequence and character set, is matched. If the first character following the [ is a ! or a ^ then any character not enclosed is matched. The sorting order of characters in range expressions is determined by the current locale and the values of the LC_COLLATE or LC_ALL shell variables, if set. To obtain the traditional interpretation of range expressions, where [a-d] is equivalent to [abcd], set value of the LC_ALL shell variable to C, or enable the globasciiranges shell option. A - may be matched by including it as the first or last character in the set. A ] may be matched by including it as the first character in the set.
+
Within [ and ], character classes can be specified using the syntax [:class:], where class is one of the following classes defined in the POSIX standard: alnum alpha ascii blank cntrl digit graph lower print punct space upper word xdigit A character class matches any character belonging to that class. The word character class matches letters, digits, and the character _.
+
Within [ and ], an equivalence class can be specified using the syntax [=c=], which matches all characters with the same collation weight (as defined by the current locale) as the character c.
+
Within [ and ], the syntax [.symbol.] matches the collating symbol symbol.
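+
For instance, with files a1, b2 and C3 in the current directory (an invented example):
+
$ ls [ab]*          # a1  b2
$ ls [[:upper:]]*   # C3
$ ls [!a]*          # b2  C3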
+
If the extglob shell option is enabled using the shopt builtin, several extended pattern matching operators are recognized. In the following description, a pattern-list is a list of one or more patterns separated by a |. Composite patterns may be formed using one or more of the following sub-patterns:
+
+
?(pattern-list)
+
Matches zero or one occurrence of the given patterns
+
+
*(pattern-list)
+
Matches zero or more occurrences of the given patterns
+
+
+(pattern-list)
+
Matches one or more occurrences of the given patterns
+
+
@(pattern-list)
+
Matches one of the given patterns
+
+
!(pattern-list)
+
Matches anything except one of the given patterns
+
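To make these concrete, a few invented examples (assuming extglob is on and files with these sorts of names exist):
+
$ shopt -s extglob
$ ls !(*.mp3|*.ogg)       # everything except MP3 and OGG files
$ ls *.+([0-9])           # names ending in a dot and one or more digits
$ ls ?(draft-)notes.txt   # notes.txt or draft-notes.txt
+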
Note that on the versions of GNU/Linux that I run (Debian, KDE Neon and Raspbian) ‘extglob’ is on by default. It is actually set in /usr/share/bash-completion/bash_completion, which is invoked directly or from /etc/bash_completion, which is in turn invoked from the default ~/.bashrc. These are all Debian-derived distributions, so I can’t speak for others.↩
Following on from my last show on filename expansion, concentrating on extended patterns and the extglob option, I was asked a question by Jon Kulp in the comment section.
+
Jon was using ‘ls *(*.mp3|*.ogg)’ to find all OGG and MP3 files in a directory which also held other files. However, when he wanted to copy this subset of files elsewhere he had problems using this expression in an scp command.
+
Having done some investigations to help solve this I thought I’d put what I found into an HPR episode and share it, and this is the show.
+
Test Environment
+
On one of my Raspberry Pis (rpi4) I made some empty test files in a directory called scptest for the purposes of this show.
I then ran a command on rpi4 to copy selected files from the scptest directory to another Raspberry Pi called rpi5, where I have created a directory called test for the purpose. I have copied my ssh key to that machine already, so no password is prompted for.
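+
A sketch of the set-up and the push, with file names invented for illustration. The push works because the extended glob is expanded by the local interactive shell (where extglob is on) before scp ever sees it:
+
$ mkdir scptest && cd scptest
$ touch one.mp3 two.mp3 three.ogg README.txt
$ shopt -s extglob
$ scp *(*.mp3|*.ogg) dave@rpi5:test/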
If I try the equivalent from the other host, pulling the files from rpi4 to rpi5, I don’t get what I might expect:
+
$ scp dave@rpi4:scptest/*(*.mp3|*.ogg) .
bash: -c: line 0: syntax error near unexpected token `('
bash: -c: line 0: `scp -f scptest/*(*.mp3|*.ogg)'
+
Running the command again with the -v option we can see that the line ‘scp -f scptest/*(*.mp3|*.ogg)’ is being executed on rpi4 and this is causing the error. The conclusion is that scp itself is doing something that’s not compatible with this expression.
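+
The relevant line can be picked out of the verbose output with something like this (a reconstructed transcript; the exact debug formatting varies between OpenSSH versions):
+
$ scp -v dave@rpi4:scptest/*(*.mp3|*.ogg) . 2>&1 | grep 'Sending command'
debug1: Sending command: scp -f scptest/*(*.mp3|*.ogg)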
+
My later investigations revealed that extglob is apparently off when this command is being executed, but more of this anon.
+
Alternatives
+
First try - attempting to use extended globs
+
I found an article about this issue on StackExchange with a very comprehensive (if impenetrable) answer.
+
The answer points out that scp simply hands the filename (or expression) to the remote machine, where it is interpreted by whatever shell runs there. This could be any shell.
+
The answer suggests that the remote filename could be a command for the remote system, but that doesn’t seem to be the case in my very simple test:
+
$ scp dave@rpi4:'ls' .
scp: ls: No such file or directory
+
This is probably too naive to work as it is, however.
+
It is suggested that the following command will work though: a shell function named safer_scp. Note that the command contains a newline inside the string passed to ‘scp’, before the word ‘bash’. This is necessary for the command to work.
You might want to skip this part since it gets into deep deep Bash and scp magic!
+
This all hinges on the fact that in this case scp works by doing the following:
+
+
It connects to the remote machine using the remote username and host name. It does this using ssh, creating a “tunnel” between the two and running a shell at the remote end.
+
Over the tunnel it issues a command to be run on the remote machine which consists of scp -f FILENAME. The -f option runs scp in “remote” mode. This option is undocumented but can be seen in the source code.
+
The remote end copies the file (or files) back to the local end. It interprets the filename or glob expression using the shell opened on the remote machine.
+
+
The safer_scp function takes advantage of these features. Note that the body of a function can be any compound command. A series of commands enclosed in parentheses is such a compound command, BUT it executes in a sub-shell, where the more usual compound command in braces does not. I am not 100% clear why it is written this way, but experimentation has shown that without a body in parentheses running the function will disconnect from the remote machine! My suspicion is that the exec within the function is the cause: in a sub-shell exec replaces only that sub-shell, whereas with braces it would replace the calling shell itself, which then vanishes when scp finishes.
+
In the function the variable ‘file’ is set to the first argument. This is then removed from the function argument list with ‘shift’.
+
The variable ‘LC_SCPFILES’ is defined, set to the part of the contents of the ‘file’ variable after the first colon.
+
The ‘exec’ command runs the rest of the function as a command which replaces the currently executing shell. The command invoked is an ‘scp’ command which passes the environment variable ‘LC_SCPFILES’ to the remote end (using the -o option with ‘SendEnv=LC_SCPFILES’).
+
The arguments to ‘scp’ are two strings: the first is the long quoted string dissected below, and the second consists of the remaining arguments to safer_scp ("$@").
+
The first argument expands variable ‘file’, returning the first part (by removing the colon and everything after it). It then adds a colon and takes input from ‘/dev/null’. This is then followed by a newline.
+
The rest of the string invokes Bash, setting the ‘extglob’ option with the -O option and reading the following string as a command as specified by the -c option. The command is a further ‘exec’ which runs ‘scp’.
+
This instance of scp uses the undocumented option -f (as mentioned earlier). This tells scp that it is running as the remote instance.
+
The -- (double hyphen) is a convention to tell a program that the options have ended. This protects the following filename (in variable LC_SCPFILES) from possibly being interpreted as options.
+
So, going back to the entire string being handed to the first scp, this does the following:
+
+
It receives the username and host string (as in dave@rpi4) with a colon at the end. The rest of the remote file specification is /dev/null and when this is processed the usual remote scp exits.
+
The part after the newline is then executed. It runs Bash with extglob on and invokes another scp which simulates the one which is normally run - but now guaranteed to be in a Bash shell and with extglob on. This then sends the file or files back to the local end after expanding the glob pattern held in variable LC_SCPFILES.
+
The exit after the Bash process ensures the process invoked at the remote end shuts down.
+
+
This complex set of events compensates for deficiencies of scp and allows extended glob patterns to be passed through. However, it’s still error-prone, as will be seen later.
+
The function does actually work, but it’s so obscure and reliant on what seem like edge conditions or hidden features that I don’t think it should be used.
+
Second try - just use simpler globs
+
If the requirement is to use an extended glob expression in the solution then this one will not suit. However, if the goal is to copy files, then it will!
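+
One way of writing it (my reconstruction, assuming the same hosts and directories as before):
+
$ scp dave@rpi4:'scptest/*.{mp3,ogg}' test/
+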
This does the job. The expression passed to the remote end is a simple glob pattern (with a brace expansion), and this does not rely on extglob being on at the remote end. It may not work if the glob uses Bash-specific patterns and the remote account uses a shell other than Bash, though.
+
Third try - use ‘rsync’ with a filter
+
I have never encountered this issue with ‘scp’ myself when moving files around between servers. I do a lot of file moving both for myself and as an HPR “janitor”. The reason I haven’t seen it is because I usually use ‘rsync’.
+
There is a way of using rsync to achieve what was wanted here, though it does not use extended glob patterns.
+
The ‘rsync’ command can be told to copy files from a directory, copying those that match a pattern and excluding the rest. This is done with filters.
+
The ‘rsync’ command is very powerful and hard to master. In fact there is scope for a whole HPR series on its intricacies. However, we’ll just restrict ourselves to the use of filters here to solve this problem.
+
Here’s what I do:
+
+
Make a filter stored in a file
+
Run ‘rsync’ with the filter
+
+
Making a filter file
+
I created a file called ‘.rsync_test’:
+
$ cat .rsync_test
+ *.mp3
+ *.ogg
- *
+
Lines beginning with ‘+’ are rules for inclusion. Those beginning with ‘-’ are exclusions. The order is significant.
+
These rules tell ‘rsync’ to include all files ending ‘.mp3’ and ‘.ogg’. Anything else is to be excluded.
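+
The full command was along these lines (reconstructed from the option and argument descriptions which follow, so treat the exact form as a sketch):
+
$ rsync -vaP -e ssh --filter=". .rsync_test" dave@rpi4:scptest/ test/
+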
The options are:
+
-vaP                       select verbose mode (v), archive mode (a, shorthand for many other options) and show progress (P)
+
-e ssh                     use ssh to transfer files
+
--filter=". .rsync_test"   use a filter
+
The filter expression is ‘. .rsync_test’ where the leading ‘.’ is short for ‘merge’ and tells rsync to read filter rules from the file.
+
The arguments are:
+
dave@rpi4:scptest/   the remote host and directory to copy from
+
test/                the local directory to copy to
+
It is a good idea to use the ‘-n’ option when setting up such a command, to check that everything works as it should, before running it for real. This option turns on ‘dry-run’ mode where the process is run without actually copying anything.
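+
For example, the same sketch with -n added:
+
$ rsync -navP -e ssh --filter=". .rsync_test" dave@rpi4:scptest/ test/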
+
You don’t have to use the filter file. The following command does the same:
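+
One equivalent (again my reconstruction) uses rsync’s --include and --exclude options instead of the filter file:
+
$ rsync -vaP -e ssh --include='*.mp3' --include='*.ogg' --exclude='*' dave@rpi4:scptest/ test/
+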
The ‘rsync’ tool is a beast and needs careful treatment! Things to be aware of if you want to go further than this simple guide:
+
+
‘rsync’ will traverse a directory hierarchy (it’s recursive)
+
the presence of a trailing slash on the source directory makes it transfer the contents of the directory. Without it the directory itself and its contents will be copied
+
‘rsync’ compares source and destination files. If a file already exists at the destination it will not copy it. However, if the source copy is different from the destination copy ‘rsync’ will transfer only the differences
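+
For instance, the trailing-slash rule in action (invented local paths):
+
$ rsync -a src/ dest/    # copies the contents of src into dest
$ rsync -a src dest/     # creates dest/src and copies the contents there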
+
+
Another digression
+
Since I am already well off the rails with this episode I thought I’d go looking at another area commented on by clacke in the context of show 2293.
+
You are probably aware that file names containing spaces (and other unusual characters) can be difficult to use with commands and programs in Unix and Linux. The question was how scp would behave. I thought I’d do some experimentation with filenames containing spaces.
+
You might want to skip this part since it gets into more of the guts of scp!
+
I created a file on rpi4 called “what a horrible filename.txt” and tried to pull it across to rpi5. In each case I used the -v option to scp in order to see all the details of what was going on. Be warned that this generates a lot of output.
+
+
1. scp -v dave@rpi4:'scptest/what a horrible filename.txt' test/
   This is normally one way filenames with spaces can be dealt with, but it fails here because the quotes are removed in the transfer.
+
2. scp -v dave@rpi4:'scptest/what\ a\ horrible\ filename.txt' test/
   Another way of protecting spaces is to escape each of them with a backslash; this time they are inside the quoted string. This works: the quotes are removed but the backslashes remain to protect the spaces.
+
3. scp -v dave@rpi4:"scptest/what\ a\ horrible\ filename.txt" test/
   Double quotes are equivalent to single ones in this context, so this works in the same way as example 2.
+
4. scp -v dave@rpi4:scptest/what\ a\ horrible\ filename.txt test/
   This is normally another way that spaces can be protected, but it fails here because the backslashes are removed in the first pass. It is logically equivalent to example 1.
+
5. scp -v dave@rpi4:scptest/what\\ a\\ horrible\\ filename.txt test/
   Since the scp process removes quotes and backslashes first time round, we’ll try doubling them. This does not work because the remote end gets the filename with literal backslashes and rejects it.
+
6. scp -v dave@rpi4:scptest/what\\\ a\\\ horrible\\\ filename.txt test/
   Since the last test failed we’ll try trebling the backslashes. This works, rather counter-intuitively I find. (The local shell reduces each run of three backslashes plus a space to a single backslash followed by a space, which the remote shell then reads as an escaped space.)
+
7. scp -v dave@rpi4:'"scptest/what a horrible filename.txt"' test/
   Nested quotes are another solution, and indeed this works. The quotes must be of different types though: single inside double or vice versa.
+
+
You might wonder how the safer_scp function we saw earlier deals with such filenames. I could not get it to transfer the file using any of these formats.
+
However, by modifying it slightly (removing the backslash in front of $LC_SCPFILES) it worked.
This modified function passed all of the tests of plain filenames and glob patterns which I tried. I am still not sure that I’d use it myself though.
+
Conclusion
+
The scp command is built on the original BSD Unix command rcp. I don’t know if this is why it has the quirks we have looked at here, but it does seem to suffer from some deficiencies. However, I find it useful and usable most of the time.
+
Using rsync solves a number of the problems scp exhibits, though it has its own shortcomings. I think a good working knowledge of scp and rsync is important in a sysadmin’s toolkit and can be of great use to all Unix/Linux users.
+
+
Building a Digital Clock Kit (HPR Show 2329)
+
Dave Morriss
+
Table of Contents
Introduction
+
In April 2017 my son and I decided to each build a digital clock. I had been interested in the idea since seeing Big Clive build one on YouTube, and I think my son had been similarly motivated.
+
He found one, which I have linked to below. It’s smaller than the one shown by Big Clive, comes from Shenzhen, China, and currently costs $5.35 (about £4.18) postage free. It takes a long time to arrive, so patience is needed!
+
There are many digital clock kits on eBay, and lots of YouTube videos showing how to build them. I think it’s a great project for someone wanting some soldering practice which is a little more demanding than a beginner project.
+
One type to avoid, I think, is the surface mount type. The one I have uses a through-hole PCB, but I have seen some that provide SMD (surface-mounted device1) components. That type of soldering is beyond me at the moment (though my son has been teaching himself to do it).
+
Unpacking
+
I have included a number of images with the episode. What you see here are thumbnails. Click on them to view the full image.
+
The kit came in a standard bubble-wrap package, with some components bagged or shrink-wrapped together.
+
Picture: The components after un-boxing
+
Contents were a PCB, a 4-digit display, a perspex box, two chips and associated sockets, various components and a USB power lead. Contrary to what is stated on the eBay site, there is a battery included in the kit, a CR1220 Lithium Cell.
+
Picture: The components unwrapped
+
The PCB looked nice, so I photographed front and rear.
+
Pictures: The PCB component side and reverse side
+
Building
+
So, on to building the kit. There are some hints about the sequence of components. I started with the resistors.
+
In the image I show off the PCB holder I got for Christmas!
+
Picture: Starting to add components
+
The PCB holder can’t cope as the board becomes populated since there’s nothing to hold on to.
+
Getting the LDR (light dependent resistor) and the thermistor (temperature sensor) positioned so they protrude through the case needs care. In case you wondered, I washed before removing the seal. ☺
+
Pictures: Most components added, front and back view of the PCB
+
I test assembled the case and wrote on each piece where it was to go. It was not entirely simple to get right, I found.
+
Picture: Putting the case together to see how it fits
+
The fitting of the display needed care since the pins were a bit splayed or bent and needed straightening. Orientation is important of course, as is clearance of components on the board (like the crystal, which could short out pins).
+
Pictures: Test fitting the display, showing clearance for the display pins. Then showing the display soldered on and pins cropped short
+
The clock was a tight fit in the case, which is why it was very important to ensure that everything was properly aligned on the PCB and the clearance between the PCB and display chip was as small as possible.
+
Even so, the speaker did not match with the hole in the case. It was made to fit eventually by careful positioning (not brute force, though I was tempted!)
+
Pictures: The clock installed in the case, but slightly misaligned. Various views
+
Everything was assembled and power applied, and it worked!
+
Taking photos of it in action is difficult.
+
Picture: It lives!
+
Instructions
+
In short, these are pretty bad! Mine consisted of a single sheet, badly photocopied and difficult to read in places.
+
There is a diagram of the PCB, which is helpful for component placement, as is the list of components with numbers like r1 and r2 matching the picture.
+
The written instructions themselves are poor. The translation seems very rough, using ‘welding’ for ‘soldering’, and many sentences are close to meaningless. Things like:
+
+
+
The pins with diagonal cutting pliers cut short (this step is important) as far as possible, avoid to resist digital tube affect beautiful.
+
Welding digital tube, digital tube must pay attention to the final, or placed on the back of the device can’t welding.
+
+
+
I think this means to ensure all component wires are trimmed as short as possible to avoid touching the display – which fits on the reverse of the PCB. This makes sense because there is a very tiny clearance for the PCB and display or they don’t fit in the case.
+
Picture: The instructions aren’t very good
+
Setting the clock
+
This was a bit of a challenge. I think that the process is:
+
+
Press the set (top) button once to change the time (hours). The change is performed by repeatedly pressing the add button.
+
Press set again to change the minutes of the time, using add as before.
+
Press set again to adjust the hours of the alarm, using add.
+
Press set again to adjust the minutes of the alarm, using add.
+
Press set again to enable/disable the alarm. Its state is shown by the light in the bottom right corner of the display (light off = alarm off)
+
Press set again to enter the mode controlling an hourly beep. Here add changes the left side of the display which defines the hour at which the beep is turned on (e.g. 08 = 8 a.m.)
+
Press set again and the right side of the display flashes, then use add to adjust. This defines the end time for hourly beeps (e.g. 20 = 20:00, 8 p.m.)
+
Press set again then use add to enable/disable this mode. The rightmost light on bottom of the display shows the mode is enabled when on.
+
Press set one more time and the clock is back to normal (“normal walking”, as the instructions say).
+
+
The thing I try not to forget is that the set key needs 9 presses to cycle through all settings and back to normal.
+
Impressions
+
The clock is fine, and not bad for the price. On the other hand you get what you pay for!
+
The timekeeping is OK, though I have seen a bit of drift in the few weeks I have had it. The battery backup is good to have (though the battery used, a CR1220, is not quite as easy to find as most, according to my researches).
+
The clock shows the temperature every 30 seconds then returns to the time display. The temperature sensor is not accurate at all. I have my clock on top of a powered USB hub under one of my monitors. It may be warmer there than elsewhere, but 29°C seems high on a coolish day where other thermometers are reading 24°C in the house. Some means of calibration would be nice.
+
The light sensor (LDR) turns the brightness down when the ambient light level is lower, which is good since the display is very bright.
Take care when searching for “SMD” since it has multiple meanings (I discovered)!↩
Podcast list additions (HPR Show 2339)
+
Dave Morriss
+
Table of Contents
+
I did two HPR shows 1516 and 1518 in 2014 about the podcast feeds I’m subscribed to. I have made a few additions since then (and a few subtractions) and I thought I’d share a few of the additions.
+
The list below shows the feed title followed by a number of details taken from the feed or computed from my podcast database. The feeds are ordered by category (my classification) and then by title within each category.
Description: Insights into the business world - featuring content from BBC Radio 4’s In Business programme, and also Global Business from the BBC World Service.
Description: Criminal is a podcast about crime. Not so much the “if it bleeds, it leads,” kind of crime. Something a little more complex. Stories of people who’ve done wrong, been wronged, and/or gotten caught somewhere in the middle. We are a proud member of Radiotopia, from PRX, a curated network of extraordinary, story-driven shows. Learn more at radiotopia.fm.
Description: In “Hardcore History” journalist and broadcaster Dan Carlin takes his “Martian”, unorthodox way of thinking and applies it to the past. Was Alexander the Great as bad a person as Adolf Hitler? What would Apaches with modern weapons be like? Will our modern civilization ever fall like civilizations from past eras? This isn’t academic history (and Carlin isn’t a historian) but the podcast’s unique blend of high drama, masterful narration and Twilight Zone-style twists has entertained millions of listeners.
Description: Making It is a biweekly audio podcast hosted by Jimmy Diresta, Bob Clagett and David Picciuto. Three different makers with different backgrounds talking about creativity, design and making things with your bare hands.
Description: Loud, fast-talking and deceptively funny, this politically-independent “forward-thinking pragmatist” looks at the events shaping our world through a uniquely American lens. It’s smarter than you think, and faster than you expect.
Description: This Week in Microbiology (TWiM). A podcast about unseen life on Earth hosted by Vincent Racaniello and friends. Following in the path of his successful shows ‘This Week in Virology’ (TWiV) and ‘This Week in Parasitism’ (TWiP), Racaniello and guests produce an informal yet informative conversation about microbes which is accessible to everyone, no matter what their science background. As a science Professor at Columbia University, Racaniello has spent his academic career directing a research laboratory focused on viruses. His enthusiasm for teaching inspired him to reach beyond the classroom using new media. TWiM is for everyone who wants to learn about the science of microbiology in a casual way. While there are no exams or pop quizzes, TWiM does encourage interaction with the audience via comments on specific episodes, email and Skype. Listeners can also use www.MicrobeWorld.org to suggest topics for the show by submitting articles, papers, video and images to the site and tagging them with “TWiM”. Each week Racaniello will view the tagged content and select items for discussion. For questions and/or feedback please email ccondayan@asmusa.org.
Description: The Weekly Space Hangout is recorded every Friday on the Cosmoquest G+ page. Journalists and astronomers discuss the top stories of the week in space and science, and answer audience questions. These are the audio files of those recordings.
Description: Welcome to the Edinburgh Skeptics Society podcast. We’ll be bringing you talks from our guest speakers on a variety of topics in our Skeptics in the Pub podcast. There’ll be talks from areas such as science, social issues, politics, and lots more, all with a view to promoting reason and critical thinking. You’ll also be able to see what makes our guest speakers tick with our 10 Questions segment, and recordings of our Edinburgh Fringe Festival and Edinburgh International Science Festival events. Do make sure you rate or review us, and get in touch and let us know what we’re doing right (or wrong!). Email us at podcast@edskeptics.co.uk
Description: Listen to learn the real state of science behind astronomy-, physics-, and geology-related creationist claims, hoaxes, conspiracy theories, misconceptions, and bad or incomplete media reporting.
Description: The Pen Addict is a weekly fix for all things stationery. Pens, pencils, paper, ink – you name it, and Brad Dowdy and Myke Hurley are into it. Join as they geek out over the analog tools they love so dearly. Hosted by Myke Hurley and Brad Dowdy.
Description: A weekly conversation that gets to the heart of open source technologies and the people who create them. This show features in-depth interviews with the best and brightest software engineers, hackers, leaders, and innovators. Hosts Adam Stacoviak and Jerod Santo face their imposter syndrome so you don’t have to. This is a polyglot podcast. All programming languages, platforms, and communities are welcome. Open source moves fast. Keep up.
Description: From the independent magazine for the Ubuntu Linux community. The Full Circle Weekly News is a short podcast with just the news. No chit-chat. No time wasting. Just the latest FOSS/Linux/Ubuntu news.
+
+
File: hpr2348_example_vimrc_5
+
+" This version is released with Vim Hints 005
+" -------------------------------------------
+" Ensure Vim runs as Vim
+set nocompatible
+
+" Keep a backup file
+set backup
+
+" Keep change history
+set undodir=~/.vim/undodir
+set undofile
+
+" Show the line,column and the % of buffer
+set ruler
+
+" Always show a status line per window
+set laststatus=2
+
+" Show Insert, Replace or Visual on the last line
+set showmode
+
+" Stop beeping! (Flash the screen instead)
+set visualbell
+
+" Show incomplete commands
+set showcmd
+
+" Increase the command history
+set history=100
+
+" Turn off case in searches
+set ignorecase
+
+" Turn case-sensitive searches back on if there are capitals in the target
+set smartcase
+
+" Do incremental searching
+set incsearch
+
+" Set the search scan to wrap around the file
+set wrapscan
+
+" Highlight all matches when searching
+set hlsearch
+
+" Map (redraw screen) to also turn off search highlighting until the
+" next search
+nnoremap :nohl
+
+" Allow extra movement in INSERT mode
+set backspace=indent,eol,start
+
+" Enable syntax highlighting
+syntax on
+
+" Indent automatically
+set autoindent
+
+" Wrap at 78 characters
+set textwidth=78
+
+" In Insert mode use numbers of spaces instead of tabs
+set expandtab
+
+" Define number of spaces to use for indenting
+set shiftwidth=4
Vim Hints 005 (HPR Show 2348)
+
Dave Morriss
+
Table of Contents
+
Vim Hints is back!
+
Oops! Where did half of 2015, all of 2016 and the first half of 2017 go?
+
Well, life got in the way, plus motivation dwindled somewhat. This series is very demanding - the sed series was a walk in the park compared to tackling the continental-scale landscape of Vim!
+
Still, the original goal was to try and introduce the really useful features of Vim and to make it manageable for everyday use. The hope was, and still is, that the series could get people started on their own journeys through its marvels.
+
Also, with the currently circulating StackOverflow article on “How to exit the Vim editor?”, it’s worth pointing out that we dealt with that subject in episode 1, and this issue is revealed as the ridiculous meme that it really is!
+
Quick recap
+
To recap, the last episode of this planned series was in March 2015. Here’s a list of links to all of the episodes so far:
Let’s briefly describe what was covered in these episodes to set the context.
+
So far we have looked at very basic editing in episode 1, where we mentioned modes Normal, Insert and Command modes.
+
In episode 2 we looked at Vim’s backup mechanism, undoing and redoing changes, and file recovery in the event of a problem. We started using the .vimrc configuration file.
+
We began looking at movement commands in Normal mode in episode 3, and beefed up the configuration file somewhat.
+
More movement commands were covered in episode 4 as well as searching. We began looking at commands that make changes, adding, inserting, deleting and changing text in various ways. The concept of doing these things with various movements was covered. Again, a number of useful options for the configuration file were introduced.
+
Copying and pasting
+
So far we have inserted, changed and deleted text, all in Normal mode. Now we want to look at how to copy existing text and how to paste text. See the Vim Help (type :help change.txt) or the online documentation here for the full details.
+
Copying
+
The yy command in Normal mode yanks or copies lines. Just like dd (which we saw in episode 4), if preceded by a count it will yank more than the default one line.
+
Note that the Y command is a synonym for yy. It doesn’t do the equivalent of what C and D do, i.e. operate from the current position to the end of the line.
+
The y command yanks or copies characters. Like c (change) and d (delete) (seen in episode 4) it needs a movement command to follow. The table below shows some examples of the operator+movement combinations:
+
Command   Action
yw        Yank from the cursor to before the start of the next word
ye        Yank from the cursor to the end of the next word
y$        Yank from the cursor to the end of the line
y0        Yank from before the cursor to the beginning of the line
y)        Yank from the cursor to the end of the sentence
+
Pasting
+
Having copied text (or having deleted or changed it) it’s then possible to paste it (or put it, as the Vim documentation calls it). The various delete commands save the last text that was deleted, and the change commands save the last text as it was before it was changed.
+
The p command in Normal mode puts (pastes) text after the cursor. The P command puts (pastes) text before the cursor. In both cases the cursor is left on the last character of the pasted text. Both commands can be preceded by a count, resulting in the text being pasted multiple times.
+
A number of Vim commands can be preceded by g which makes changes to their effects. We will visit these as we introduce new features.
+
In the case of these paste commands the effects are:
+
Command   Action
gp        Just like p but leave the cursor after the pasted text
gP        Just like P but leave the cursor after the pasted text
+
Registers
+
Deleted, changed and copied text is stored in a register. If none is specified the default (unnamed) register is used, but a number of other registers exist in Vim and can be chosen as the source and destination of commands.
+
+
This is a large topic, and this information is just a forward reference to the subject which we’ll look at in more detail in a forthcoming show in the series.
+
+
Examples of cutting, copying and pasting
+
+
A simple use of cut and paste is the sequence xp. This swaps the character under the cursor with the one after it. It’s really useful if, like me, your fingers keep typing teh instead of the, for example.
+
The sequence dwwP is useful for swapping words. Remember that dw deletes the current word (assuming the cursor is on the first character), then the next w moves forward one word and P pastes the deleted word in front of it. This is not the most robust and reliable way of doing this, but hopefully it makes the point.
+
The sequence ywP yanks the current word (again assuming the cursor is on the first character) and pastes it in front of the cursor, thereby duplicating the word.
+
+
Text objects again
+
We saw in episode 4 that Vim has the concept of text objects, and we looked at sentences and paragraphs, and at movements and actions relating to them. There are more than these, and in this episode we’ll look at them in the context of commands. We’ll just touch the surface of this subject for now, and will come back for a deeper look in a later episode. See the Vim Help (type :help motion.txt) or the online documentation here for the full details.
+
Defining an inner object
+
We have seen commands like dw and yw which have an effect relating to a word. The command yw means “yank from the cursor position to before the beginning of the next word”. However, yiw means “yank inner word” and has the effect of yanking from the start of the current word to the end of that word. That is, it doesn’t matter where in the word the cursor is positioned.
+
Similarly diw deletes the entire word the cursor is positioned on. The “inner word” means that it does not include any non-word character after the word. In fact, the same applies to leading non-word characters too.
+
There are many objects that can be used with this “inner” text selection mechanism, including sentences and paragraphs. We will not look at all of them in this episode, but will revisit the subject again later.
+
Defining an object
+
This terminology is a little confusing, but it exists because the effect is achieved by using an a (“an object”) rather than the i for “inner object”. (I like to think of the a as signifying all as a way to remember it.)
+
Here yaw includes all trailing white space (if there is any) and leading white space if there was no trailing space. Again the effect works regardless of where the cursor is positioned in the word.
+
There are many objects that can be used with this type of text selection mechanism, including sentences and paragraphs. We will not look at all of them in this episode, but will revisit the subject again later.
+
Examples
+
The following example shows two rows of numbers which represent the column number in the line of text which follows. We will use these lines to show the result of actions at certain cursor positions:
+
         1         2         3         4         5         6
123456789012345678901234567890123456789012345678901234567890
Hacker Public Radio is dedicated to sharing knowledge.
+
+
Cursor at column 10 (on the b of Public). Typing diw here results in the deletion of “Public” leaving the leading and trailing spaces.
+
+
Hacker Radio is dedicated to sharing knowledge.
+
+
Cursor at column 10. Typing daw here results in the deletion of “Public” including the trailing space.
+
+
Hacker Radio is dedicated to sharing knowledge.
+
+
Cursor at column 48 (on the w of knowledge). Typing diw here results in the deletion of knowledge but leaves the leading space and the terminating full stop.
+
+
Hacker Public Radio is dedicated to sharing .
+
+
Cursor at column 48. Typing daw here results in the deletion of knowledge and the leading space, thereby terminating the sentence.
+
+
Hacker Public Radio is dedicated to sharing.
+
A few other objects
+
Command   Action
yis       Yank inner sentence (start to ending punctuation)
yas       Yank a sentence (including trailing white space)
yip       Yank inner paragraph (start to before terminating blank line)
yap       Yank a paragraph (including trailing blank line)
+
More changes
+
Joining lines
+
There are times when two lines adjacent to one another might need to be joined together. This can be achieved with the J command (remember that j is a cursor movement command). The J command can be preceded by a count to join multiple lines.
+
The J command places a space between the joined lines. It removes the <EOL> (end of line) characters between lines and replaces them with spaces. More spaces may be inserted if certain options are enabled (such as ‘joinspaces’, which makes J add two spaces after the end of a sentence).
+
The gJ command (remember g is often used for variants of certain commands) joins lines like J does but without adding spaces.
+
Example
+
Given the following three lines, we will demonstrate the results of the two commands:
+
Hacker
Public
Radio
+
Positioning the cursor on the first line and typing 3J results in:
+
Hacker Public Radio
+
Whereas the same with 3gJ results in:
+
HackerPublicRadio
+
+
Configuration file
+
The configuration file we have built so far (see episode 4) has grown moderately long, and it will get longer in this episode. In order to simplify matters this is now included as a separate file: example_vimrc_5.
+
Full information on the options available in Vim can be found in the Vim Help (type :h options.txt) or online here.
+
Syntax highlighting
+
This turns on Vim’s syntax highlighting features. We haven’t really looked at these in detail, but it’s useful to have some colouring and highlighting wherever it’s available.
+
syntax on
+
Indenting
+
If you have indented a line while typing, and start a new line then Vim will automatically indent that line the same as the original. This is very useful when writing a program or when preparing text.
+
This feature is turned on with the command:
+
set autoindent
+
The abbreviation for the command is se ai and the effect can be reversed with set noautoindent or se noai.
+
Automatic wrapping
+
As you are typing in Insert mode Vim can wrap automatically to the next line (by automatically adding the necessary line break). It does this when the defined line width has been reached and the current word is completed. It does not split words.
+
Note that if you add to an existing line and make it exceed the text width, Vim will not wrap the line.
+
The maximum width of text which triggers wrapping can be defined with the command:
+
set textwidth=NNN
+
For example:
+
set textwidth=78
+
The abbreviation for the command is se tw=NNN and the text width feature can be turned off with set textwidth=0 or se tw=0. The text width feature is turned off by default.
+
Tabs or spaces
+
In Insert mode, pressing the TAB key inserts a TAB character and moves the cursor to the appropriate tab stop.
+
The subject of whether to use TAB characters or spaces to indent programs can generate much discussion. We will look at this matter in more depth later in the series, but for now I suggest we make Vim replace TAB characters with spaces by default and make indenting work in increments of 4 columns.
+
This can be achieved with two configuration options: expandtab and shiftwidth.
+
The expandtab option forces all TAB characters to be replaced by the appropriate number of spaces in Insert mode. This is a Boolean option, so to turn it on you need:
+
set expandtab
+
To turn it off use:
+
set noexpandtab
+
The command can be abbreviated to se et or se noet.
+
Note that if the file you are editing already contains TAB characters this setting will not affect them. There is a command mode command :retab which can be used to replace all TAB characters but we’ll look at that later.
+
The shiftwidth option controls the number of spaces to use for autoindenting. It takes an argument - the number of spaces:
+
set shiftwidth=4
+
This sets the autoindent step to 4 spaces.
+
The option can be abbreviated to se sw=4.
+
As already mentioned, this is not the whole story, but we’ll leave this subject to be developed in an upcoming episode. (Hint: we’ll be looking at tabstop and softtabstop later).
+
We will also look at the use of CTRL+D (<C-D>) and CTRL+T (<C-T>) to delete and add indents to the automatically created ones.
+
Turning off the search highlight
+
It was mentioned in episode 4 that when searching for text with incsearch and hlsearch on, all of the matches are highlighted. These strings remain highlighted until another search is executed or the :nohl command is issued.
+
One way to simplify the cancellation of the highlight is shown in this episode’s example configuration file. It uses a feature we have not seen yet, the mapping of a key to a command. We will look at this in detail in a later episode. Suffice it to say that if you add the following to your .vimrc you will be able to turn off the highlighting by typing CTRL-L, which will also refresh (redraw) the window:
+
nnoremap <C-L> :nohl<CR><C-L>
+
+
Summary
+
+
Copying
+
+
yy or Y to copy a line
+
y{motion} to copy text up to a movement target
+
+
Pasting
+
+
p puts (pastes) after the cursor
+
P puts (pastes) before the cursor
+
gp and gP like p and P but leave the cursor after the pasted text
+
+
Text objects with i and a
+
+
i means the inner object
+
a means all of the object
+
+
Joining lines
+
+
J joins with spaces
+
gJ joins without spaces
+
+
+
Configuration file
+
" Previous stuff omitted for now, see 'example_vimrc_5'
+
+" Enable syntax highlighting
+syntax on
+
+" Indent automatically
+set autoindent
+
+" Wrap at 78 characters
+set textwidth=78
+
+" In Insert mode use numbers of spaces instead of tabs
+set expandtab
+
+" Define number of spaces to use for indenting
+set shiftwidth=4
+
+" Highlight searches (use <C-L> to temporarily turn off
+" highlighting; see the mapping of <C-L> below)
+set hlsearch
+
+" Map <C-L> (redraw screen) to also turn off search highlighting
+" until the next search
+nnoremap <C-L> :nohl<CR><C-L>
+
+
File: hpr2438_continue_example.awk
+
#!/usr/bin/awk -f

#
# Loop, printing numbers from 0-20, except for 5
# (From the GNU Awk User's Guide)
#
BEGIN {
    for (x = 0; x <= 20; x++) {
        if (x == 5)
            continue
        printf "%d ", x
    }
    print ""
}
File: hpr2438_divisor.awk
+
#!/usr/bin/awk -f

# find smallest divisor of num
{
    num = $1

    #
    # Make an infinite loop using the for loop
    #
    for (divisor = 2; ; divisor++) {
        #
        # If the number is divisible by 'divisor' then we're done
        #
        if (num % divisor == 0) {
            printf "Smallest divisor of %d is %d\n", num, divisor
            break
        }

        #
        # If the value of 'divisor' has got too large the number has no
        # divisors and is therefore a prime number
        #
        if (divisor * divisor > num) {
            printf "%d is prime\n", num
            break
        }
    }
}
File: hpr2438_divisor.out
+
$ echo 67 | ./divisor.awk
67 is prime
$ echo 69 | ./divisor.awk
Smallest divisor of 69 is 3
Gnu Awk - Part 8 (HPR Show 2438)
+
Dave Morriss
+
Table of Contents
+
Introduction
+
This is the eighth episode of the “Learning Awk” series that Mr. Young and I are doing.
+
Recap of the last episode
+
+
The while loop: tests a condition and performs commands while the test returns true
+
The do while loop: performs commands after the do, then tests afterwards, repeating the commands while the test is true.
+
The for loop (type 1): initialises a variable, performs a test, and increments the variable all together, performing commands while the test is true.
+
The for loop (type 2): sets a variable to successive indices of an array, performing a collection of commands for each index.
+
+
These types of loops were demonstrated by examples in the last episode.
+
Note that the example for ‘do while’ was an infinite loop (perhaps as a test of the alertness of the audience!):
+
#!/usr/bin/awk -f
BEGIN {

    i=2;
    do {
        print "The square of ", i, " is ", i*i;
        i = i + 1
    }
    while (i != 2)

    exit;
}
+
The condition in the while is always true:
+
The square of 2 is 4
The square of 3 is 9
The square of 4 is 16
The square of 5 is 25
The square of 6 is 36
The square of 7 is 49
The square of 8 is 64
The square of 9 is 81
The square of 10 is 100
...
The square of 1269630 is 1611960336900
The square of 1269631 is 1611962876161
The square of 1269632 is 1611965415424
The square of 1269633 is 1611967954689
The square of 1269634 is 1611970493956
...
+
The variable i is set to 2, the print is executed, then i is set to 3. The test “i != 2” is then true, and remains true ad infinitum.
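+
A version that terminates, for comparison (my own small variant), just needs a condition that eventually becomes false:
+
#!/usr/bin/awk -f
BEGIN {
    i = 2
    do {
        print "The square of ", i, " is ", i*i
        i = i + 1
    } while (i <= 10)
}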
+
Some more statements
+
We will come back to loops later in this episode, but first this seems like a good point to describe another statement: the switch statement.
+
The switch statement
+
This is specific to gawk, and can be disabled if non-GNU awk-compatibility is required. The switch statement in gawk is very similar to the one in C and many other languages.
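+
The general shape, paraphrased from the GNU Awk manual (so check the manual for the authoritative grammar), is:
+
switch (expression) {
    case value:
        case-body
        break
    default:
        default-body
}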
The ‘expression’ part returns a numeric or string result. The ‘value’ part after the case is a numeric or string constant or a regular expression.
+
The expression is evaluated and the result matched against the case values in turn. If there is a match the case-body statements are executed. If there is no match the default-body statements are executed.
+
The following example is included as one of the files associated with this show, called switch_example.awk:
+
#!/usr/bin/awk -f

#
# Example of the use of 'switch' in GNU Awk.
#
# Should be run against the data file 'file1.txt' included with the second
# show in the series: http://hackerpublicradio.org/eps/hpr2129/file1.txt
#
NR > 1 {
    printf "The %s is classified as: ", $1

    switch ($1) {
        case "apple":
            print "a fruit, pome"
            break
        case "banana":
        case "grape":
        case "kiwi":
            print "a fruit, berry"
            break
        case "strawberry":
            print "not a true fruit, pseudocarp"
            break
        case "plum":
            print "a fruit, drupe"
            break
        case "pineapple":
            print "a fruit, fused berries (syncarp)"
            break
        case "potato":
            print "a vegetable, tuber"
            break
        default:
            print "[unclassified]"
    }
}
+
The result of running this script against the “fruit” file presented in show 2129 is the following (switch_example.out):
+
The apple is classified as: a fruit, pome
The banana is classified as: a fruit, berry
The strawberry is classified as: not a true fruit, pseudocarp
The grape is classified as: a fruit, berry
The apple is classified as: a fruit, pome
The plum is classified as: a fruit, drupe
The kiwi is classified as: a fruit, berry
The potato is classified as: a vegetable, tuber
The pineapple is classified as: a fruit, fused berries (syncarp)
+
+
What this simple example does is:
+
+
It ignores the first line of the file (a header)
+
It prints the first field (the name of a fruit - mostly) in the string “The %s is classified as:”. There is no newline so whatever is printed next is appended to the line.
+
It uses the first field in a switch statement. Each case is an exact match with the contents of the field. If there is a match a print statement is used to print out the Botanical classification. If there are no matches then the default instance would print “[unclassified]”, but that doesn’t happen in this example.
+
All print statements are followed by break. If break were omitted, execution would fall through into the following case body, and so on; this can be desirable in some instances. See the next section for a discussion of break.
+
Note that banana, grape and kiwi are all Botanically classified as a berry, so there are three case parts associated with one print.
+
+
The break statement
+
This statement is mainly for “breaking out of” a for, while or do-while loop, though, as we have seen, it can interrupt the flow of execution in a switch statement also. Outside of these statements break has no effect.
+
In a loop a break statement is often used where it’s not possible to determine the number of iterations of the loop beforehand. Invoking break completely terminates the enclosing loop (relevant when there are nested loops, or loops within loops).
+
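To illustrate that point, here is a minimal sketch of my own (not from the original notes) where break leaves only the inner loop:
+
#!/usr/bin/awk -f
+
+# 'break' terminates the inner loop; the outer loop carries on
+BEGIN {
+    for (i = 1; i <= 3; i++) {
+        for (j = 1; j <= 3; j++) {
+            if (j == 2)
+                break
+            printf "%d,%d ", i, j
+        }
+    }
+    print ""
+}
+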
The following example (available for download as divisor.awk) is from the GNU Awk manual and shows a method of finding the smallest divisor of a number:
+
#!/usr/bin/awk -f
+
+# find smallest divisor of num
+{
+    num = $1
+
+    #
+    # Make an infinite loop using the for loop
+    #
+    for (divisor = 2; ; divisor++) {
+        #
+        # If the number is divisible by 'divisor' then we're done
+        #
+        if (num % divisor == 0) {
+            printf "Smallest divisor of %d is %d\n", num, divisor
+            break
+        }
+
+        #
+        # If the value of 'divisor' has got too large the number has no
+        # divisors and is therefore a prime number
+        #
+        if (divisor * divisor > num) {
+            printf "%d is prime\n", num
+            break
+        }
+    }
+}
+
I have added some comments to this script to (hopefully) make it clearer.
+
Running this in a pipeline with the number presented to it as shown results in the following type of output (divisor.out):
+
$ echo 67 | ./divisor.awk
+67 is prime
+$ echo 69 | ./divisor.awk
+Smallest divisor of 69 is 3
+
The continue statement
+
This is similar to break in that it is used in a for, while or do-while loop. It is not relevant in switch statements, however.
+
Invoking continue skips the rest of the enclosing loop and begins the next cycle.
+
The following example (available for download as continue_example.awk) is from the GNU Awk manual and demonstrates a possible use of continue:
+
#!/usr/bin/awk -f
+
+#
+# Loop, printing numbers from 0-20, except for 5
+# (From the GNU Awk User's Guide)
+#
+BEGIN {
+    for (x = 0; x <= 20; x++) {
+        if (x == 5)
+            continue
+        printf "%d ", x
+    }
+    print ""
+}
+
The next statement
+
This statement is not related to loops in the same way as break and continue but to the main record processing cycle of Awk. The next statement causes Awk to stop processing the current input record and go on to the next one.
+
As we know from earlier episodes in this series, Awk reads records from its input stream and applies rules to them. The next statement stops the execution of further rules for the current record, and moves on to the next one.
+
The following example (available for download as next_example.awk) demonstrates a use of next:
+
#!/usr/bin/awk -f
+
+#
+# Ignore the header
+#
+NR == 1 { next }
+
+#
+# If field 2 (colour) is less than 6 characters then save it with its line
+# number and skip it
+#
+length($2) < 6 {
+    skip[NR] = $0
+    next
+}
+
+#
+# It's not the header and the colour name is 6 or more characters, so print
+# the line
+#
+{
+    print
+}
+
+#
+# At the end show what was skipped
+#
+END {
+    printf "\nSkipped:\n"
+    for (n in skip)
+        print n": "skip[n]
+}
+
+
The script uses next in the first rule to avoid the first line of the file (a header).
+
The second rule skips lines where the colour name is less than 6 characters long, but it also saves that line in an array called skip using the line number as the key (index).
+
The third rule prints anything it sees, but it will not be invoked if either rule 1 or rule 2 has caused the record to be skipped.
+
Finally, an END rule prints the contents of the array.
+
+
Running this with the file we have used many times before, file1.txt, results in the following output (next_example.out):
+
$ next_example.awk file1.txt
+banana yellow 6
+grape purple 10
+plum purple 2
+pineapple yellow 5
+
+Skipped:
+2: apple red 4
+4: strawberry red 3
+6: apple green 8
+8: kiwi brown 4
+9: potato brown 9
+
+
pdmenu (HPR Show 2443)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
Pdmenu is a tool written by Joey Hess which allows the creation of a simple menu in a terminal (console) window. It is in his list of less active projects, and the latest version is dated 2014, but it seems to be quite complete and useful as it is.
+
I like simple menus. As a sysadmin in my last job I used one on the OpenVMS system I managed; it helped me run the various periodic tasks - especially the less frequent ones - without having to remember all of the details.
+
I do the same on my various Linux systems, and find that pdmenu is ideal for the task.
+
Installation
+
I found pdmenu in the Debian repositories (I run Debian Testing), and it was very easily installed. The C source is available as a tarfile, though I haven’t tried building it myself.
+
Running pdmenu
+
Simply typing pdmenu at a command prompt will invoke the utility. It uses the file /etc/pdmenurc as its default configuration file, and this generates a menu with a demonstration of some of its features.
+
This demonstration menu is not particularly useful, but it can be overridden by creating your own configuration, which by default is in ~/.pdmenurc. The pdmenu command itself also takes a configuration file as an argument, so there is plenty of flexibility.
+
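For instance, a hypothetical invocation naming a personal menu file (the file name here is my own illustration) might be:
+
$ pdmenu ~/menus/admin.pdmenurc
+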
The configuration file
+
Example
+
I use the ~/.pdmenurc file at present, so I will talk about that. This file contains definitions (commands) that look like this example from the pdmenu manual page:
+
#Set a pleasing color scheme.
+color:desktop:blue:blue
+color:title:blue:white
+color:base:blue:white
+
+#this is a comment
+menu:main:Main Menu:Things to do at foobar
+ show:_Games..::games
+ exec:_Mail::pine
+ exec:_News::slrn -C
+ exec:_WWW::lynx
+ exec:_Irc::irc
+ exec:_Directory Listing:display:ls -l
+ exec:_Who's online?:truncate:w
+ exec:_Finger:edit,truncate:finger ~finger who?:~
+ nop
+ exit:E_xit
+
+menu:games:Games:Some text-based games
+ exec:_Tetris for Terminals::/usr/games/tt
+ exec:_Adventure:pause:/usr/games/adventure
+ exec:_Zork:pause:/usr/games/zork
+ nop
+ exit:_Back to main menu..
+
+
The first block of lines uses color commands to set the colours of the menu display.
+
The next block defines a menu with the menu command. The menu’s internal name is main, and its title is ‘Main Menu’. The text “Things to do at foobar” is displayed at the bottom of the screen as help text.
+
+
The first item in the menu is a link to another menu called ‘Games’ which is defined later in the file.
+
The underscore before the ‘G’ makes it a hot-key which is highlighted
+
The exec command makes a menu entry which runs a command
+
The nop command leaves a line in the menu (with optional text)
+
The exit command exits the current menu to the level above
+
+
+
+[Image: Top level menu and sub-menu from the above example]
+
There is quite a lot more to be said about pdmenu but I’ll leave you to investigate further if it seems interesting to you.
+
However, I will mention the group command and how it can be used to create dynamic menus, just to give you some idea of the power and flexibility of this utility.
+
Dynamic menus
+
I am using pdmenu to help manage various administrative tasks I do for HPR. The latest menu I have built helps me intercept the notes from newly uploaded shows, which I check and edit if necessary, generate HTML if needed and then upload the result for incorporation.
+
I use a number of scripts for all of this which I will not go into here. I get alerted when a new show is in the process of being uploaded. I have a tool that checks to see if the upload has finished, and when it is complete I grab the notes and save a local copy. I then process these notes as necessary.
+
Here is the menu definition:
+
menu:showsubmission:HPR Show Submission:Deal with incoming shows
+ exec:_Show status:pause:~/HPR/Show_Submission/NS_test
+ exec:_Rsync new show notes::~/HPR/Show_Submission/sync_hpr
+ exec:_Copy notes:pause:~/HPR/Show_Submission/copy_shownotes
+ nop:--
+ group:_Process unprocessed shownotes
+ exec::makemenu:~/HPR/Show_Submission/makemenu
+ show:Process notes::process
+ remove:::process
+ endgroup
+ nop:--
+ exit:E_xit HPR Show Submission
+
The interesting bit is the group command. It invokes an exec with the makemenu flag. This takes the output of the group and makes a menu out of it. I call a script I wrote called makemenu (not very originally!) which works out which files need processing and offers a menu to do it. The menu is called process, and the show command is used to display it. Once finished the menu is deleted with the remove command.
+
I have made an example using dummy show number 2465 to demonstrate the base menu and the dynamically generated sub-menu. I’m using the same colours as the previous example.
+
+[Image: Top level menu and sub-menu from my pdmenu menu]
+
Here’s what my makemenu script generates to make the sub-menu:
+
$ ./makemenu
+menu:process:Process notes for 2465:Process notes for 2465
+exec:Show _raw (2465):pause:~/HPR/Show_Submission/do_show 2465
+exec:_Parse raw (2465):pause:~/HPR/Show_Submission/do_parse 2465
+exec:_Edit notes (2465):pause:~/HPR/Show_Submission/do_vim 2465
+exec:Run _Pandoc (2465):pause:~/HPR/Show_Submission/do_pandoc 2465
+exec:Run _Midori (2465):pause:~/HPR/Show_Submission/do_midori 2465
+exec:_Upload HTML (2465):pause:~/HPR/Show_Submission/do_upload 2465
+exit:E_xit processing for 2465
+
This system is under development so may well change in the light of experience.
+
+
Useful Bash functions - part 3 (HPR Show 2448)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Overview
+
This is the third show about Bash functions. These are a little more advanced than in the earlier shows, and I thought I’d share them in case they are useful to anyone.
+
As before it would be interesting to receive feedback on these functions, and it would be great if other Bash users contributed ideas of their own.
+
Example Functions
+
The read_value function
+
The purpose of this function is to output a prompt and read a string. The string is written to a nominated variable and a default string can be provided if required.
+
A typical call might be:
+
$ read_value 'What is your name? ' name
+What is your name? Herbert
+$ echo $name
+Herbert
+
Here, the first argument is the prompt and the second is the name of the variable to receive the answer.
+
If the default is used then the reply is pre-filled with it:
+
$ read_value 'Where do you live? ' country USA
+Where do you live? USA
+
This pre-filled reply can be edited or deleted and another value entered instead.
+
#=== FUNCTION ================================================================
+# NAME: read_value
+# DESCRIPTION: Read a value from STDIN and handle errors.
+# PARAMETERS: 1 - Prompt string for the read
+# 2 - Name of variable to receive the result
+# 3 - Default value (optional)
+# RETURNS: 1 on error, otherwise 0
+#===============================================================================
+read_value () {
+    local prompt="${1:?Usage: read_value prompt outputname [default]}"
+    local outputname="${2:?Usage: read_value prompt outputname [default]}"
+    local default="${3:-}"
+    local var
+
+    #
+    # Make an option for the 'read' if there's a default
+    #
+    if [[ -n $default ]]; then
+        default="-i '$default'"
+    fi
+
+    #
+    # Read and handle CTRL-D (EOF). Use 'eval' to deal with the argument being
+    # a variable
+    #
+    eval "read -r -e $default -p '$prompt' var"
+    res="$?"
+    if [[ $res -ne 0 ]]; then
+        echo "Read aborted"
+        return 1
+    fi
+
+    #
+    # Return the value in the nominated variable
+    #
+    eval "$outputname='$var'"
+    return 0
+}
+
The function is not very complex and has a number of similarities with the various iterations of the yes_no function encountered in earlier episodes. See the Links section below for links to these episodes.
+
We will not dwell too long on this function as a consequence of these similarities. However, it does something the previous functions didn’t do, it returns a string to the caller.
+
Bash functions can’t return anything but small integer status values (via return), unlike functions in higher-level languages. This function writes a global variable with the value it has requested. It could have been designed to always write to the same global variable, but this is ugly. I wanted the caller to be able to nominate the variable to receive the result.
+
The main purpose of the function is to call the Bash built-in command ‘read’ with various options. If there is a default string provided we need to turn that into an option preceded by ‘-i’. This is achieved on lines 18-20 where we store the result back in the variable ‘default’.
+
The read is performed on line 26, and we use the ‘eval’ command to do this. This is used because we need to make Bash scan the line twice, and the way eval works allows us to achieve this.
+
The eval command takes its arguments (in this case a string), concatenates them and executes the result in the current environment. The string it is presented with in this case is first processed by Bash where it performs the various types of expansion.
+
If we look at an example this might help to clarify the issue (the function has to be made available to Bash using the source command before this will work):
+
$ read_value 'What is your surname? ' surname 'Not provided'
+
Here, the variable default will contain ‘Not provided’ (without the quotes, which get stripped when the command is parsed by Bash), and that will be converted to “-i 'Not provided'”. The argument to eval will then be expanded to:
+
read -r -e -i 'Not provided' -p 'What is your surname? ' var
+
This command will then be executed.
+
If we had not used eval and had instead written:
+
read -r -e $default -p "$prompt" var
+
then the contents of default would not have been re-parsed by Bash. Using eval causes two scans of the command: the first time the parameters are substituted, and the second time the resulting command is executed.
+
The following is the result of running the function on the command line with tracing on (set -x):
+
$ set -x
+
+$ read_value 'What is your surname? ' surname 'Not provided'
++ read_value 'What is your surname? ' surname 'Not provided'
++ local 'prompt=What is your surname? '
++ local outputname=surname
++ local 'default=Not provided'
++ local var
++ [[ -n Not provided ]]
++ default='-i '\''Not provided'\'''
++ eval 'read -r -e -i '\''Not provided'\'' -p '\''What is your surname? '\'' var'
+++ read -r -e -i 'Not provided' -p 'What is your surname? ' var
+What is your surname? Putin
++ res=0
++ [[ 0 -ne 0 ]]
++ eval 'surname='\''Putin'\'''
+++ surname=Putin
++ return 0
+
+$ set +x; echo "$surname"
++ set +x
+Putin
+
+
The lines that start with a $ are commands I typed, and those beginning with a + are the output of Bash’s trace mode. I added blank lines after each command (plus any output it generated).
+
+
You can see the function arguments being placed in local variables.
+
The default value is saved
+
When the eval is shown all of the variables used are expanded, then the resulting command is run
+
The prompt is shown, with my response “Putin”
+
The eval that returns the result in variable surname is shown (the variable name is in the local variable var)
+
I turn off the trace (set +x) and echo the result in this variable.
+
+
The check_value function
+
This function was designed to be used in conjunction with read_value to check that the string read in is valid.
#=== FUNCTION ================================================================
+# NAME: check_value
+# DESCRIPTION: Checks a value against a list of regular expressions
+# PARAMETERS: 1 - the value to be checked
+# 2..n - valid Bash regular expressions
+# RETURNS: 0 if the value checks, otherwise 1
+#===============================================================================
+check_value () {
+    local value="${1?Usage: check_value value list_of_regex}"
+    local matches=0
+
+    #
+    # Drop parameter 1 then there should be more
+    #
+    shift
+    if [[ $# == 0 ]]; then
+        echo "Usage: check_value value list_of_regex"
+        return 1
+    fi
+
+    #
+    # Loop through the regex args checking the value, counting matches
+    #
+    while [[ $# -ge 1 ]]
+    do
+        if [[ $value =~ $1 ]]; then
+            (( matches++ ))
+        fi
+        shift
+    done
+
+    #
+    # No matches, then the value is bad
+    #
+    if [[ $matches == 0 ]]; then
+        return 1
+    else
+        return 0
+    fi
+}
+
The idea is that read_value has been used to get a string from the user of a script. This string may need to be checked to see whether it conforms to a pattern. For example, I have written a script that helps me manage the HPR shows I am in the process of writing. At some point I will have chosen a slot in the queue and want to record that the show number is hpr9876 or whatever. I might even want to perform the check against a list of possibilities. I would give the incoming string to check_value and get it to compare against a list of regular expressions.
+
+
The function takes the value to be checked as the first argument followed by one or more (Bash-style) regular expressions. Note that we use a variant of the usual parameter substitution in the local command here [1].
+
After saving the first argument in the variable called value the next thing the function does (line 15) is to drop the first argument from the parameter list.
+
Then (lines 16-19) it checks the $# variable (number of arguments) to see if there are any regular expressions. If not it prints an error message and exits with a ‘false’ value.
+
A while loop (lines 24-30) then processes each argument until there are no more. Each time it compares the argument (assuming it’s a regular expression) against the variable called value, incrementing the matches variable if they match. After the test shift is used to drop that argument.
+
By line 35 the matches variable should be zero if nothing matched or non-zero if there were any matches. The function returns ‘false’ in the first instance and ‘true’ otherwise.
+
+
This might be tested as follows:
+
$ source read_value.sh
+$ source check_value.sh
+$ demo () {
+ name=
+ until read_value "What is your first name? " name && test -n "$name"; do
+ :
+ done
+
+ if check_value "$name" "^[A-Za-z]+$" "^0[Xx][A-Fa-f0-9]+$"; then
+ echo "Hello $name"
+ else
+ echo "That name isn't valid"
+ fi
+}
+$ demo
+What is your first name? Herbert
+Hello Herbert
+$ demo
+What is your first name? 0x1101
+Hello 0x1101
+$ demo
+What is your first name? Jim42
+That name isn't valid
+$ demo
+What is your first name? 0xDEAD42
+Hello 0xDEAD42
+$ demo
+What is your first name? DEAD42
+That name isn't valid
+
+
This defines a temporary function called demo
+
The variable name is created and made empty
+
The function read_value is called in an until loop where it requests the caller’s first name and writes the response to name. It is concatenated [2] with a call to the built-in test command using the -n option against the contents of the name variable (substituted into a string). This returns ‘true’ if the string is not empty, in which case the loop stops. If no name is given the loop will repeat. The loop body consists of a null command (colon).
+
A call to check_value follows in an if command. If the check returns ‘true’ then the name variable is echoed, otherwise an error message is given.
+
The call to check_value uses name as the string to check and it matches against two regular expressions. The first checks for at least one upper or lower case alphabetic character, and nothing else (no spaces in the name allowed). The second expects a hexadecimal number which begins with ‘0X’ or ‘0x’ followed by at least one hexadecimal digit.
+
+
The read_and_check function
+
This function uses the two previous functions to read a value and check it is valid. It loops until a valid reply is received or the process is aborted with CTRL-D.
#=== FUNCTION ================================================================
+# NAME: read_and_check
+# DESCRIPTION: Reads a value (see read_value) and checks it (see check_value)
+# against an arbitrary long list of Bash regular expressions
+# PARAMETERS: 1 - Prompt string for the read
+# 2 - Name of variable to receive the result
+# 3 - Default value (optional)
+# 4..n - Valid regular expressions
+# RETURNS: Nothing
+#===============================================================================
+read_and_check () {
+    local prompt="${1:?Usage: read_and_check prompt outputname [default] list_of_regex}"
+    local outputname="${2:?Usage: read_and_check prompt outputname [default] list_of_regex}"
+    local default="${3:-}"
+
+    if ! read_value "$prompt" "$outputname" "$default"; then
+        return 1
+    fi
+    shift 3
+    until check_value "${!outputname}" "$@"
+    do
+        echo "Invalid input: ${!outputname}"
+        if ! read_value "$prompt" "$outputname" "$default"; then
+            return 1
+        fi
+    done
+
+    return 0
+}
+
+
The function expects a prompt (for read_value), the name of a variable to receive the result, an optional default value and a list of regular expressions.
+
The first thing the function does is to call read_value (line 16) with the relevant parameters. It does this in an if command because if the prompt is aborted with CTRL-D then a ‘false’ value is returned, so we abort read_and_check if so.
+
Next, on line 19, the shift command deletes the first three arguments so we can pass the remainder to check_value.
+
Then check_value is called in an until loop (lines 20-26) checking the value returned against the list of regular expressions. If the check is passed the loop ends and the function exits. If not then an error message is written and read_value called again (with a check for CTRL-D as before).
+
A complicating factor in this function is that the local variable outputname contains the name of the (global) variable to receive the value from the user. When we want to examine the contents of this variable we have to use indirect expansion of the form ${!outputname}. This means examine the contents of outputname and use what is there as the name of a variable whose value we require. We need to do this when handing the value returned by read_value to check_value, and when reporting that the value is invalid.
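+
The example session described next has been lost from this copy of the notes. A reconstruction (the prompt text and the regular expression here are my own guesses based on the description) might look like this:
+
$ source read_value.sh
+$ source check_value.sh
+$ source read_and_check.sh
+$ read_and_check 'Which HPR slot? ' slot '' '^(hpr[0-9]+)?$'
+Which HPR slot? HPR1234
+Invalid input: HPR1234
+Which HPR slot?
+$ echo "slot = '$slot'"
+slot = ''
+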
As before the functions are sourced to ensure Bash knows about them. You only need to do this once for each one if testing them. If using these functions in a script you’d probably just copy all three into the script, which would achieve the same.
+
Then read_and_check is called to collect an HPR slot number (I use this in my episode preparation toolkit). The variable slot is to hold the result, and there is no default.
+
The regular expression accepts lower-case ‘hpr’ followed by a number, or nothing at all.
+
The first input is rejected because it uses upper-case letters, and the second null input is accepted with no problem.
+
The echo shows that variable slot is empty.
+
+
Conclusion
+
These three functions are adequate for my use; I use them in a number of scripts I have written. They are not 100% bullet-proof. For example, if a regular expression is mistyped things could fail messily.
+
The business of passing information back from a function is messy, though it works. It can be streamlined by the use of nameref variables, which we will look at another time.
+
Please comment or email me with any improvements or changes you think would make these functions better.
+
Links
+
+
Previous HPR episodes in this group Useful Bash functions:
+
[1] The statement:
+
local value="${1?Usage: check_value value list_of_regex}"
+
is used in this function. The parameter substitution expression ${parameter:?word} means that the value of word is used as an error message and the script exits if the parameter is null or unset. Since we might want to use a parameter which is null we use the variant ${parameter?word} which merely checks for existence.↩
+
[2] Calling it concatenation is not strictly true. This is what is known as an AND list in Bash. Its generic form is: command1 && command2, and command2 is executed if and only if command1 returns an exit status of zero.↩
+
+
+
+
+
+
+
The power of GNU Readline - part 2 (HPR Show 2453)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Recap of Keys and Notation
+
(Feel free to skip this if you are up to speed with the keys and notation we used before.)
+
+
In the last episode we saw that most of the features in GNU Readline are invoked by multi-key sequences. These involve the Control key and the so-called Meta key. The Control key is usually marked Ctrl on the keyboard. The Meta key is the key marked Alt.
+
The notation used in the GNU Readline manual is C-k for ‘Control-k’, meaning the character produced when the k key is pressed while the Control key is being held down.
+
For the Meta key the notation M-k (Meta-k) means the character produced when the k key is pressed while the Meta key is being held down.
+
If your keyboard does not have a Meta key then the same result can be obtained for M-k by pressing the Esc key, releasing it, then pressing the k key.
+
In some instances both the Control and the Meta key might be used, so M-C-k would mean the character produced when the k key is pressed while the Meta and Control keys are being held down.
+
+
Note that in the last episode we looked at C-d as a way of deleting the character under the cursor (the same as the Del key, at least in my case). This key sequence has another meaning which we didn’t cover. If the input line is empty and the cursor is at the beginning of the line, C-d sends an end-of-file (EOF) indication. This can stop a script or program waiting for input, or kill the terminal emulator, amongst other effects.
+
Key sequences and the desktop
+
Depending on which desktop you use you might find that some of the key sequences used by GNU Readline do not work the way they are documented.
+
One of the areas of confusion is with the Backspace and Delete keys. In my experience of using various flavours of Unix over the years (SunOS, Solaris, DomainOS, HP-UX, Ultrix, OSF/1, TRU64 Unix), the behaviour of these keys was the cause of much confusion.
+
As explained in the initial part of this article, the original behaviour of these keys comes from the era of paper tape. The Backspace key would move the tape backwards one place and the Delete key would then overpunch the position with all 1’s, a bit like the way typists used to cancel out an individual character on a typewriter.
+
The convention of using the Backspace key for deleting characters backwards, and the Delete key for deleting the character under the cursor, did not really settle down until the late 1990s.
+
Going in for the kill
+
The term kill is used in the GNU manual to mean deleting text while saving it away for later. There you will also find the term yank, meaning to re-insert previously killed text. This is a bit confusing (not consistent with sed or vim, for example) so I will not be using these terms (though I’ll refer to them in the notes for completeness).
+
As they point out, the more modern terminology for these actions is cut and paste.
+
Deleted (cut or killed) text is stored in a place called the kill-ring and can be restored. Consecutive kills cause the text to be accumulated into one unit which can be yanked (pasted) all at once. Commands which do not kill text separate the chunks of text on the kill ring.
+
+
C-k (Control-k)
+
Delete (Kill) the text from the current cursor position to the end of the line. Deletes everything to the right.
+
+
M-d (Meta-d)
+
Delete (Kill) forward from the cursor to the end of the current word, or, if between words, to the end of the next word. Word boundaries are the same as those used by M-f (move forward a word).
+The space after the word is not deleted and the space before it is only deleted if the cursor is there.
+
+
M-DEL (M-Backspace) (Meta-DEL or Meta-Backspace)
+
Delete (Kill) backward from the cursor to the start of the current word, or, if between words, to the start of the previous word. Word boundaries are the same as those used by M-b (move backward a word).
+
+Note: I find that this functionality is available as M-Backspace on my workstation, not as M-DEL.
+
+This feature is very useful for deleting a filename component for example. We’ll look at this in the Examples section below.
+
+
C-w (Control-w)
+
Delete (Kill) backwards from the cursor to the previous whitespace. This is different from M-DEL because the word boundaries differ.
+
+
C-y (Control-y)
+
Paste (Yank) the most recently killed text back into the buffer at the cursor.
+
+
M-y (Meta-y)
+
Rotate the kill-ring, and paste (yank) the new top. You can only do this if the prior command is C-y or M-y.
+
+
+
Examples
+
Example 1
+
Type the following on the command line and position the cursor to the ‘m’ of miles (Hint: you can use the M-b command repeatedly for this). The circumflex (‘^’) below the line shows the cursor position:
+
$ echo How many miles to Babylon
+ ^
+
Press C-k, that is, hold the Control key and press k. The text from the cursor to the end of the line is deleted. Move the cursor to the start of the string over the ‘H’ of How (you could press M-b twice):
+
$ echo How many
+ ^
+
Press C-y to paste (yank) back the text we deleted (killed):
+
$ echo miles to BabylonHow many
+miles to BabylonHow many
+
Not particularly useful, but you get the idea.
+
Example 2
+
As root you want to check various log files. First the mosquitto log:
+
$ tail /var/log/mosquitto/mosquitto.log
+
Of course, you will have created this line in the first place by typing:
+
$ tail /var/log/mos
+
then pressing the Tab key to get:
+
$ tail /var/log/mosquitto/
+
Then pressing Tab again fills in the rest (assuming your /var/log/mosquitto/ directory only contains files starting with mosquitto.log).
+
Now you might want to check the system log in case it holds any clues to the problem you’re investigating, so you recall the last line:
+
$ tail /var/log/mosquitto/mosquitto.log
+
You press M-Backspace three times to delete the last three elements and get:
+
$ tail /var/log/
+
You can then type syslog to get the command:
+
$ tail /var/log/syslog
+
As one last demonstration: if you were to remove the syslog you just typed using M-Backspace you would be able to restore it with C-y, then if you typed M-y you’d see syslog replaced by mosquitto/mosquitto.log.
+
This is because the ‘kill ring’ contained the syslog text after it had been deleted, but it also contained the earlier deletion. After typing Ctrl-y to restore the last deletion the key sequence M-y rotated the ring and restored the original deletion. You can repeat M-y to repeat this process with the kill ring.
+
Hopefully you can see the power of GNU Readline to do some useful stuff when creating and editing a command.
Links
+
GNU Readline manual (Note that the widely advertised address http://cnswww.cns.cwru.edu/php/chet/readline/rltop.html seems not to work any more. This one, which I found through the main GNU site, seems OK though)
+
My daughter flew out to New Zealand before Christmas 2017 to spend some time with her brother, who had been there with his girlfriend since November. I saw her flight itinerary from the airline, but had no idea of how the times related to time back home, so I wrote a little Bash script to calculate times in UTC (my local timezone).
+
Both of my children have travelled a fair bit in the past few years. I like to keep track of where they are and how they are progressing through their journeys because otherwise I tend to worry. This one was a reasonably simple journey, two flights via Doha in Qatar, with not too long a wait between them. The overall journey was long of course.
+
When my daughter flew out to Indonesia in 2015 (4 flights and a boat trip, over 38 hours travel time) I built a spreadsheet. Just whatever provides a good distraction!
+
Script
+
Algorithm
+
I had the start and arrival times of each flight as well as the flight durations. I also had the connection time. I decided to use the date command to perform date and time calculations.
+
The date command can take a date specification with an offset. So, for example adding 1 week to the current time can be computed as:
+
$ date -d 'today + 1 week'
+Sat 6 Jan 11:13:12 GMT 2018
+
(it was 11:13 on Saturday 30th December when I ran that command)
+
However, giving date a start date and time leads to some odd results:
+
date -d '2017-12-24 14:55 + 415 minutes'
+Sun 24 Dec 10:41:00 GMT 2017
+
Adding 415 minutes (6h 55min) to 14:55 should result in 21:50.
+
I think this problem is caused by the date and time being separated by a space, so I tried another approach:
+
$ date -d "$(date -Iminutes -d '2017-12-24 14:55') + 415 minutes"
+Sun 24 Dec 21:50:00 GMT 2017
+
This uses the fact that the date command can output and interpret ISO 8601 dates, so the inner date in the command substitution produces:
+
$ date -Iminutes -d '2017-12-24 14:55'
+2017-12-24T14:55+00:00
+
+Now, adding 415 minutes to the ISO date and time is no problem for date.
+
I also discovered that I didn’t have to convert my time offset to minutes as in the examples above, so the following also works:
+
$ date -d '2017-12-24T14:55+00:00 + 6 hours + 55 minutes'
+Sun 24 Dec 21:50:00 GMT 2017
+
Not surprisingly, the offset in the form of ‘6:55’ didn’t work.
+
Code
+
Here is the script itself, which may be downloaded from the HPR website as edi_akl. It is not generic, more of a throwaway script, but it does demonstrate some principles:
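The script itself is missing from this copy of the notes. The sketch below reconstructs its outline from the description that follows: the minutes function, the flight times and the output format are taken from the text, but the details (and the exact line numbering) of the original script differed.
+
#!/bin/bash -
+
+#
+# Sketch of the 'edi_akl' script, reconstructed from the description below
+#
+
+#
+# Convert a time of the form 'H:MM' to minutes, returning the result via
+# a nameref variable (the second argument)
+#
+minutes () {
+    local hm="${1:?Usage: minutes H:MM varname}"
+    local -n __result="${2:?Usage: minutes H:MM varname}"
+    __result=$(( 60 * 10#${hm%%:*} + 10#${hm##*:} ))
+}
+
+minutes '6:55' dur1             # Edinburgh -> Doha flight time
+minutes '1:40' dur2             # Connection time in Doha
+minutes '16:10' dur3            # Doha -> Auckland flight time
+
+#
+# Compute each departure and arrival as an ISO 8601 date and time
+#
+dep1=$(date -Iminutes -d '2017-12-24 14:55')
+arr1=$(date -Iminutes -d "$dep1 + $dur1 minutes")
+dep2=$(date -Iminutes -d "$arr1 + $dur2 minutes")
+arr2=$(date -Iminutes -d "$dep2 + $dur3 minutes")
+
+echo "Outward flights:"
+echo "Leave Edinburgh at $(date -d "$dep1" '+%F %R %z')"
+echo "Arrive Doha at $(date -d "$arr1" '+%F %R %z')"
+echo
+echo "Leave Doha at $(date -d "$dep2" '+%F %R %z')"
+echo "Arrive Auckland at $(date -d "$arr2" '+%F %R %z')"
+echo
+echo "New Zealand time $(TZ='Pacific/Auckland' date -d "$arr2" '+%F %R %z')"
+
+#
+# Total journey time in hours and minutes
+#
+total=$(( dur1 + dur2 + dur3 ))
+printf 'Duration %dhr %dmin\n' $(( total / 60 )) $(( total % 60 ))
+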
I created a function called minutes which takes a time of the form ‘6:55’ as its first argument and turns it into minutes, returning the result to the caller. The second argument is the name of the variable to receive the result. The method used is a ‘nameref’ which I’ll explain more fully in an upcoming show.
+
Rather than converting to minutes I could have split the time into components and embedded them in an expression like ‘6 hours + 55 minutes’ as we saw earlier. I decided to do it using minutes.
+
I computed departure and arrival times based on the ISO 8601 format and saved them in variables to make computation easier. I then printed the values, reformatting these dates into something more readable. As you can see from the output below '+%F %R %z' generates a date in the format YYYY-MM-DD and a 24-hour 'HH:MM' time followed by the (local) timezone.
+
+All times are in my local time (the default), but at the end I worked out the arrival time in Auckland using a timezone specification (lines 48 and 49 of the original script), and finished off by adding all the durations to generate a total journey time in hours and minutes. Here is what the output looks like:
+
Outward flights:
+Leave Edinburgh at 2017-12-24 14:55 +0000
+Arrive Doha at 2017-12-24 21:50 +0000
+
+Leave Doha at 2017-12-24 23:30 +0000
+Arrive Auckland at 2017-12-25 15:40 +0000
+
+New Zealand time 2017-12-26 04:40 +1300
+Duration 24hr 45min
+
Conclusion
+
Writing a Bash script provided the necessary catharsis and distraction, as did watching flight progress on https://www.flightradar24.com/. I also learnt some stuff about the date command I didn’t know. My wandering children head back in mid-January, and I have the script for the return flight already written!
+
Links
+
+
GNU documentation for date (You can also use man date or info date for the full details. I prefer the HTML version because I don’t like the info tool very much).
This is the fourth show about the Bash functions I use, and it may be the last unless I come up with something else that I think might be of general interest.
+
There is only one function to look at this time, but it’s fairly complex so needs an entire episode devoted to it.
+
+As before it would be interesting to receive feedback on this function, and it would be great if other Bash users contributed ideas of their own.
+
The range_parse function
+
The purpose of this function is to read a string containing a range or ranges of numbers and turn it into the actual numbers intended. For example, a range like 1-3 means the numbers 1, 2 and 3.
+
I use this a lot. It’s really helpful when writing a script to select from a list. The script can show the list with a number against each item, then ask the script user to select which items they want to be deleted, or moved or whatever.
+
For example, I manage the podcasts I am listening to this way. I usually have two or three players with playlists on them. When the battery on one needs charging I can pick up another and continue listening to whatever is on there. I have a script that knows which playlists are on which player, and it asks me which episode I am listening to by listing all the playlists. I answer with a range. Another script then asks which of the episodes that I was listening to have finished. It then deletes the episodes I have heard.
+
Parsing a collection of ranges then is not particularly difficult, even in Bash, though dealing with some of the potential problems complicates matters a bit.
+
The function range_parse takes three arguments:
+
+
The maximum value allowed in the range (the minimum is fixed at 1)
+
The entered range expression (e.g. 1-3,7,14)
+
The name of the variable to receive the result
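+
The worked example that appeared here has been lost from this copy of the notes. A reconstructed session (the range string is my own illustration) might look like this:
+
$ source range_parse.sh
+$ range_parse 10 '1-4,7,3,7' list && echo "$list"
+1 2 3 4 7
+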
The function has dealt with the repetition of 7 and the fact that the 3 is already in the range 1-4 and has sorted the result as a string that can be placed in an array or used in a for loop.
+
Algorithm
+
The method used for processing the range presented to the function is fairly simple:
+
+
The range string is stripped of spaces
+
It is checked to ensure that the characters it contains are digits, commas and hyphens. If not then the function ends with an error
+
The comma-separated elements are selected one by one
+
+
Elements consisting of groups of digits (i.e. numbers) are stored away for later
+
If the element contains a hyphen then it is checked to ensure it consists of two groups of digits separated by the hyphen, and it is split up and the range of numbers between its start and end is determined
+
The results of the step-by-step checking of elements is accumulated for the next stage
+
+
The accumulated elements are checked to ensure they are each in range. Any that are not are rejected and an error message produced showing what was rejected.
+
Finally all of the acceptable items are sorted and any duplicates removed and returned as a list in a string. If any errors occurred in the analysis of the range the function returns a ‘false’ value to the caller, otherwise ‘true’ is returned. This allows it to be used where a true/false value is expected, such as in an if statement, if desired.
+
+
Analysis of function
+
Here is the function itself, which may be downloaded from the HPR website as range_parse.sh:
#=== FUNCTION ================================================================
+# NAME: range_parse
+# DESCRIPTION: Parse a comma-separated list of numbers and "number-number"
+# ranges such as '1,3,5-7,9'
+# PARAMETERS: 1 - maximum limit of the range
+# 2 - entered range expression (e.g. 1-3,7,14)
+# 3 - name of the variable to receive the result
+# RETURNS: Writes a list of values to the nominated variable and returns
+# 0 (true) if the range parsed, and 1 (false) if not
+#===============================================================================
+function range_parse {
+    local max=${1?range_parse: arg1 missing}
+    local range=${2?range_parse: arg2 missing}
+    local -n result=${3?range_parse: arg3 missing}
+
+    local item selection sel err msg exitcode=0
+
+    #
+    # Remove spaces from the range
+    #
+    range=${range// /}
+
+    #
+    # Check for invalid characters
+    #
+    if [[ $range =~ [^0-9,-] ]]; then
+        echo "Invalid range: $range"
+        return 1
+    fi
+
+    #
+    # Slice up the sub-ranges separated by commas and turn all n-m expressions
+    # into the intermediate values. Trim the trailing space from the
+    # concatenation.
+    #
+    until [[ -z $range ]]; do
+        #
+        # Get a comma-separated item
+        #
+        if [[ $range =~ [,] ]]; then
+            item=${range%%,*}
+            range=${range#*,}
+        else
+            item=$range
+            range=
+        fi
+
+        #
+        # Look for a 'number-number' expression
+        #
+        if [[ $item =~ [-] ]]; then
+            if [[ $item =~ ^([0-9]{1,})-([0-9]{1,})$ ]]; then
+                item=$(eval "echo {${item/-/..}}")
+            else
+                echo "Invalid sequence: ${item}"
+                item=
+                exitcode=1
+            fi
+        fi
+        selection+="$item "
+    done
+
+    #
+    # Check for out of bounds problems, sort the values and make them unique
+    #
+    if [[ -n $selection ]]; then
+
+        #
+        # Validate the resulting range
+        #
+        for i in $selection; do
+            if [[ $i -lt 1 || $i -gt $max ]]; then
+                err+="$i "
+            else
+                sel+="$i "
+            fi
+        done
+
+        #
+        # Report any out of range errors
+        #
+        if [[ ${err+"${err}"} ]]; then
+            msg="$(for i in ${err}; do echo "$i"; done | sort -un)"
+            msg="${msg//$'\n'/ }"
+            printf "Value(s) out of range: %s\n" "${msg}"
+            exitcode=1
+        fi
+
+        #
+        # Rebuild the selection after having removed errors
+        #
+        selection=
+        if [[ ${sel+"${sel}"} ]]; then
+            selection="$(for i in ${sel}; do echo "$i"; done | sort -un)"
+            selection="${selection//$'\n'/ }"
+        fi
+    fi
+
+    #
+    # Return the result
+    #
+    result="$selection"
+
+    return $exitcode
+}
+
+
+
Line 11: There are two ways of declaring a function in Bash. The function name may be followed by a pair of parentheses and then the body of the function (usually enclosed in curly braces). Alternatively the word function is followed by the function name, optional parentheses and the function body. There is no significant difference between the two methods.
+
Lines 12 and 13: The first two arguments for the function are stored in local variables max (the maximum permitted number in the range) and range (the string holding the range expression to parse). In both cases we use the parameter expansion feature which halts the script with an error message if these arguments are not supplied.
+
Line 14: Here local -n is used for the local variable result which is to hold the name of a variable external to the function which will receive the result of parsing the expression. Using the -n option makes it a nameref; a reference to another variable. The definition in the Bash manual is as follows:
+
+
+
Whenever the nameref variable is referenced, assigned to, unset, or has its attributes modified (other than using or changing the nameref attribute itself), the operation is actually performed on the variable specified by the nameref variable’s value. A nameref is commonly used within shell functions to refer to a variable whose name is passed as an argument to the function.
+
+There is more to talk about with nameref variables, but we will leave that for another time.
+
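In brief, a minimal sketch of my own showing the effect:
+
$ demo () { local -n ref=$1; ref='changed'; }
+$ value='original'
+$ demo value
+$ echo "$value"
+changed
+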
+
+
Line 16: Some other variables local to the function are declared here, and one (exitcode) is given an initial value.
+
Line 21: Here all spaces are being removed from the range list in variable range.
+
Lines 26 to 29: In this test the range variable is being checked against a regular expression consisting only of the digits 0-9, a comma and a hyphen. These are the only characters allowed in the range list. If the match fails an error message is written and the function returns with a ‘false’ value.
+
Lines 36-61: This is the loop which chops up the range list into its component parts. Each time it iterates a comma-separated element is removed from the range variable, which grows shorter, and the test:
+
until [[ -z $range ]]
+will become true when nothing is left.
+
+
Lines 40-46: This if statement looks to see if the range variable contains a comma, using a regular expression.
+
+
If it does a variable called item is filled with the characters of range up to the first comma. Then range is set to its previous contents without the part up to the first comma.
+
If there was no comma then item is set to the entirety of range and range is emptied. This is because this must be the last (or only) element.
+
+
Lines 51-59: At this point the element in item is either a plain number or a range expression of the form ‘number-number’. This pair of nested if statements determines if it is the latter and attempts to expand the range. The outer if tests item against a regular expression consisting of a hyphen, and if the result is true the inner if is invoked.
+
+
Line 52: compares the contents of item against a more complex regular expression. This one looks for one or more digits, a hyphen, and one or more digits.
+
+
If found then item is edited to replace the hyphen by a pair of dots. This is inside braces as the argument to an echo statement. So, given 1-5 in item the echo will be given {1..5}, a brace expansion expression. The echo is the command of an eval statement (needed to actually execute the expansion), and this is inside a command expansion. The result should be that item is filled with the numbers from the expansion so 1-5 becomes ‘1 2 3 4 5’!
+
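Tried on its own in an interactive shell (my own illustration), the mechanism looks like this:
+
$ item='1-5'
+$ eval "echo {${item/-/..}}"
+1 2 3 4 5
+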
If the regular expression does not match then this is not a valid range, so this is reported in the else branch and item is cleared of its contents. Also, since we want this error reported to the caller we set exitcode to 1 for later use.
+
+
Line 60: Here a variable called selection is being used to accumulate the successive contents of item on each iteration. We use the += form of assignment to make it easier to do this accumulation. Notice that a trailing space is added to ensure none of the numbers collide with one another in the string.
+
+
+
+
+
+
Lines 66-97: This is an if statement which tests to see if the variable selection contains anything. If it does then the contents are validated.
+
+
Lines 71-77: This is a loop which cycles through the numbers in the variable. It is a feature of this form of the for loop that it operates on a list of space-separated items, and that’s what selection contains.
+
+
Lines 72-76: This if statement checks each number to ensure that it is in range between 1 and the value in the variable max.
+
+
If it is not in range then the number is appended to the variable err
+
If it is in range it is appended to the variable sel
+
+
+
Lines 82-87: This if statement tests to determine whether there is anything in the err variable. If it contains anything then there have been one or more errors, so we want to report this. The test used here seems very strange. The reason for it is discussed below in the Explanations section, explanation 1.
+
+
Line 83: The variable msg is filled with the list of errors. This is done with a command substitution expression where a for loop is used to list the numbers in err using an echo command and these are piped to the sort command. The sort command makes what it receives unique and sorts the lines numerically. This rather involved pipeline is needed because sort requires a series of lines, and these are provided by the echo. This deals with the possible duplication of the errors and the fact that they are not necessarily in any particular order.
+
Line 84: Because the process of sorting the erroneous numbers and making them unique has added newlines to them all we use this statement to remove them. This is an example of parameter expansion, and in this one the entire string is scanned for a pattern and each one is replaced by a space. There is a problem with replacing newlines in a string however, since there is no simple way to represent them. Here we use $'\n' to do this. See the Explanations section below (explanation 2) for further details.
+
Lines 85 and 86: The string of erroneous numbers is printed here and exitcode is set to 1 so the function can flag that there has been an error when it exits. It doesn’t exit at this point though, since some callers will simply ignore the returned value and carry on regardless.
+
+
Lines 92-96: At this point we have extracted all the valid numbers and stored them in sel and we want to sort them and make them unique as we did with err before returning the result to the caller. We start by emptying the variable selection in anticipation.
+
+
Line 93: This if statement checks that the sel variable actually contains anything. This test uses the unusual construct ${sel+"${sel}"}, which was explained for an earlier test. (See explanation 1 in the Explanations section below).
+
Lines 94 and 95: These rebuild selection by extracting the numbers from sel, sorting them and making them unique, and then removing the newlines this process has added. See the notes for lines 82-87 above and explanation 2 below.
+
+
+
Line 102: Here the variable result is set to the contents of selection. Now, since result is a nameref variable containing the name of a variable passed in when the range_parse function was called it is that variable that receives the result.
+
Line 104: Here the function returns to the caller. The value returned is whatever is in exitcode. By default this is zero, but if any sort of error has occurred it will have been set to 1, as discussed earlier.
+
+
Explanations
+
+
The expression ${err+"${err}"} (see Lines 82-87 above), also ${sel+"${sel}"} (see Line 93 above): As far as I can determine this strange expression is needed because of a bug in the version of Bash I am running.
+
+In all of my scripts I include the line set -o nounset (set -u is equivalent) which has the result of treating the use of unset variables in parameter expansion as a fatal error. The trouble is that either err or sel might be unset in this function in some circumstances, which would result in the function stopping with an error. It should be possible to test a variable to see whether it is unset without the function crashing!
+
+This expression is a case of a parameter expansion of the ${parameter:+word} type, but without the colon. It returns a null string if the parameter is unset or null, and the contents otherwise, and it does so without triggering the unset variable error.
+
+I don’t like resorting to “magic” solutions like this but it seems to be a viable way of avoiding this issue.
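+
+A minimal sketch showing the behaviour (assuming a Bash with nounset enabled; maybe is an illustrative variable that is never set):
+
set -o nounset
+
+# Referencing the unset variable directly would abort the script:
+#   [[ -n $maybe ]]    # fatal: "maybe: unbound variable"
+
+# The ${parameter+word} form expands to nothing when 'maybe' is unset,
+# so the test fails quietly instead of crashing:
+if [[ ${maybe+"${maybe}"} ]]; then
+    echo "maybe is set and non-empty"
+else
+    echo "maybe is unset or empty"
+fi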
+
The expression $'\n' (see Line 84 above): This is an example of ANSI-C quoting. See the GNU Bash Reference Manual in the ANSI-C Quoting section for the full details.
+
+The construct must be written as $'string' which is expanded to whatever characters are in the string with certain backslash sequences being replaced according to the ANSI-C standard. This allows characters such as newline (\n) and carriage return (\r) as well as Unicode characters to be easily inserted. For example echo $'\U2192' produces → (in a browser and in many terminals).
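+
+As a quick sketch combining this with the parameter expansion seen on line 84:
+
s=$'one\ntwo\nthree'    # a string with embedded newlines
+s="${s//$'\n'/ }"       # replace each newline with a space
+echo "$s"               # prints: one two three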
+
+
Possible improvements
+
This function has been around the block for quite a few years. I wrote it originally for a script I developed at work in the 2000s and have been refining and using it in many other projects since. Preparing it for this episode has resulted in some further refinements!
+
+
The initial space removal means that '7,1-5' and '7 , 1 - 5 ' are identical as far as the algorithm is concerned. It also means that '4 2', which might have been written that way because a comma was omitted, is treated as '42' which might be a problem.
+
The command substitutions which sort lists of numbers and make them unique have to make use of the sort command. Ideally I’d like to avoid using external programs in my Bash scripts, but trying to do this type of thing in Bash where sort does a fine job seems a little extreme!
+
The reporting of all of the numbers which are out of range could lead to a slightly bizarre error report if called with arguments such as 20 '5-200' (where the second zero was added in error). Everything from 21-200 will be reported as an error! The function could be cleverer in this regard.
+
The range_parse function does not care what order the numbers and ranges are organised in the comma-separated list. It does not care about range overlaps either, nor does it care about empty items in the list. It flags items which are out of range but still prepares a final list.
+
A simple demo script
+
The simple script called range_demo.sh, which may be downloaded from the HPR website is as follows:
+
#!/bin/bash -
+
+#
+# Test script to run the range_parse function
+#
+
+set -o nounset # Treat unset variables as an error
+
+#
+# Source the function. In a real script you'd want to provide a path and check
+# the file is actually there.
+#
+source range_parse.sh
+
+#
+# Call range_parse with the first two arguments provided to this script. Save
+# the output in the variable 'parsed'. The function is called in an 'if'
+# statement such that it takes different action depending on whether the
+# parsing was successful or not.
+#
+if range_parse "$1" "$2" parsed; then
+    echo "Success"
+    echo "Parsed list: ${parsed}"
+else
+    echo "Failure"
+fi
+
+exit
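+
A hypothetical session with the demo (output traced through the function’s logic rather than captured from a real run):
+
$ ./range_demo.sh 10 '7,1-5'
+Success
+Parsed list: 1 2 3 4 5 7
+$ ./range_demo.sh 10 '5-12'
+Value(s) out of range: 11 12
+Failure
+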
Why do it this way? I did a double-take while preparing these notes wondering why I had organised the logic here in this way.
+
+The first part of the loop is concerned with getting the next item from a comma-separated list. At that point the contents of $item are either a bare number or a 'number-number' range. The differentiator between the two is a hyphen, so checking for that character allows the complex regular expression on line 52 to be skipped when no hyphen is present.
+
+If you can think of a better way of doing this please let me know in the comments or by email.↩
+
+
+
+
+
+
+
diff --git a/eps/hpr2483/hpr2483_range_demo.sh b/eps/hpr2483/hpr2483_range_demo.sh
new file mode 100755
index 0000000..aeb7af9
--- /dev/null
+++ b/eps/hpr2483/hpr2483_range_demo.sh
@@ -0,0 +1,28 @@
+#!/bin/bash -
+
+#
+# Test script to run the range_parse function
+#
+
+set -o nounset # Treat unset variables as an error
+
+#
+# Source the function. In a real script you'd want to provide a path and check
+# the file is actually there.
+#
+source range_parse.sh
+
+#
+# Call range_parse with the first two arguments provided to this script. Save
+# the output in the variable 'parsed'. The function is called in an 'if'
+# statement such that it takes different action depending on whether the
+# parsing was successful or not.
+#
+if range_parse "$1" "$2" parsed; then
+ echo "Success"
+ echo "Parsed list: ${parsed}"
+else
+ echo "Failure"
+fi
+
+exit
diff --git a/eps/hpr2483/hpr2483_range_parse.sh b/eps/hpr2483/hpr2483_range_parse.sh
new file mode 100755
index 0000000..e73fd1c
--- /dev/null
+++ b/eps/hpr2483/hpr2483_range_parse.sh
@@ -0,0 +1,106 @@
+#=== FUNCTION ================================================================
+# NAME: range_parse
+# DESCRIPTION: Parse a comma-separated list of numbers and "number-number"
+# ranges such as '1,3,5-7,9'
+# PARAMETERS: 1 - maximum limit of the range
+# 2 - entered range expression (e.g. 1-3,7,14)
+# 3 - name of the variable to receive the result
+# RETURNS: Writes a list of values to the nominated variable and returns
+# 0 (true) if the range parsed, and 1 (false) if not
+#===============================================================================
+function range_parse {
+ local max=${1?range_parse: arg1 missing}
+ local range=${2?range_parse: arg2 missing}
+ local -n result=${3?range_parse: arg3 missing}
+
+ local item selection sel err msg exitcode=0
+
+ #
+ # Remove spaces from the range
+ #
+ range=${range// /}
+
+ #
+ # Check for invalid characters
+ #
+ if [[ $range =~ [^0-9,-] ]]; then
+ echo "Invalid range: $range"
+ return 1
+ fi
+
+ #
+ # Slice up the sub-ranges separated by commas and turn all n-m expressions
+ # into the intermediate values. Trim the trailing space from the
+ # concatenation.
+ #
+ until [[ -z $range ]]; do
+ #
+ # Get a comma-separated item
+ #
+ if [[ $range =~ [,] ]]; then
+ item=${range%%,*}
+ range=${range#*,}
+ else
+ item=$range
+ range=
+ fi
+
+ #
+ # Look for a 'number-number' expression
+ #
+ if [[ $item =~ [-] ]]; then
+ if [[ $item =~ ^([0-9]{1,})-([0-9]{1,})$ ]]; then
+ item=$(eval "echo {${item/-/..}}")
+ else
+ echo "Invalid sequence: ${item}"
+ item=
+ exitcode=1
+ fi
+ fi
+ selection+="$item "
+ done
+
+ #
+ # Check for out of bounds problems, sort the values and make them unique
+ #
+ if [[ -n $selection ]]; then
+
+ #
+ # Validate the resulting range
+ #
+ for i in $selection; do
+ if [[ $i -lt 1 || $i -gt $max ]]; then
+ err+="$i "
+ else
+ sel+="$i "
+ fi
+ done
+
+ #
+ # Report any out of range errors
+ #
+ if [[ ${err+"${err}"} ]]; then
+ msg="$(for i in ${err}; do echo "$i"; done | sort -un)"
+ msg="${msg//$'\n'/ }"
+ printf "Value(s) out of range: %s\n" "${msg}"
+ exitcode=1
+ fi
+
+ #
+ # Rebuild the selection after having removed errors
+ #
+ selection=
+ if [[ ${sel+"${sel}"} ]]; then
+ selection="$(for i in ${sel}; do echo "$i"; done | sort -un)"
+ selection="${selection//$'\n'/ }"
+ fi
+ fi
+
+ #
+ # Return the result
+ #
+ result="$selection"
+
+ return $exitcode
+}
+
diff --git a/eps/hpr2493/hpr2493_full_shownotes.html b/eps/hpr2493/hpr2493_full_shownotes.html
new file mode 100755
index 0000000..daec869
--- /dev/null
+++ b/eps/hpr2493/hpr2493_full_shownotes.html
@@ -0,0 +1,162 @@
+
+
+
+
+
+
+
+ YouTube Subscriptions - update (HPR Show 2493)
+
+
+
+
+
+
+
+
+
YouTube Subscriptions - update (HPR Show 2493)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
I reported on some of my YouTube subscriptions in show 2202, where I concentrated on the various Maker channels I subscribe to.
+
Since then I have added a few more such channels, but this time I also want to talk about some of the others I subscribe to.
+
YouTube Channels
+
I have collected details from the ‘About’ pages of each of the channels - when there is anything to collect. I have added my own notes about each channel too.
Comments: This is a new channel that I have recently subscribed to. I don’t know much about her history but Anne seems to be a talented maker and seems to be on a quest to learn many new skills, which she shares here. Not many episodes yet.
Description: “The trashiest electronic channel on YouTube. We test and use affordable electronic soldering equipment and tools to build, teardown, modify (and sometimes destroy) random electronic stuff.”
+
Comments: Clive is a Scot located on the Isle of Man. His videos cover all manner of electrical devices, tearing them down, repairing them and sometimes building them. His video output is prodigious!
Description: “Videos all about computers and computer stuff.”
+
Comments: Some fascinating episodes if you are interested in programming and Computer Science. Some of the computer history episodes have been really good. Professor Brian Kernighan (co-creator of AWK) has been interviewed many times on here.
Description: “Hey guys, we’re Evan and Katelyn and we make stuff together: woodworking, 3D printing, welding, props, CNC, home projects, sometimes practical stuff, sometimes totally unnecessary but fun stuff.”
+
Comments: I have only recently subscribed but am enjoying their often clever projects and their relaxed and friendly style.
Description: “ExplainingComputers uploads weekly computing videos. It is produced by Christopher Barnatt, who spent 25 years teaching computing and future studies in the University of Nottingham, and who is the author of thirteen books.”
+
Comments: Some really well done reviews and explanations of computer-related things. A lot about single board computers in recent times. The recent explanations of Quantum Computing and Blockchain have been very good.
Description: “HomeMade-Modern.com is an online design source that shares design ideas with hopes of inspiring people to make more of the things they own.”
+
Comments: Run by Ben Uyeda, sometimes with his sister Jessie. Some clever and unusual projects that can be inspiring.
Description: “Here you will find everything from money saving tips, woodworking projects, jigs and completely wild contraptions. I have been building and designing since I was old enough to swing a hammer. With a mind for out of the box thinking and an unusual mix of artfull design and engineering interest, you never know whats going to happen next.”
+
Comments: Some original and cool ideas. Izzy Swan is something of an inventor as well as a maker.
Description: “Home for woodworking, construction, upcycling, reclaiming, welding, epoxy, etc. My goals are to inspire you to make cool things, get people excited about making, and teaching while keeping it super fun and interesting.”
+
Comments: Some excellent videos of clever projects, often using reclaimed materials such as pallets. Usually quite funny too (often with fart jokes).
Description: “Hi I’m Neil and I like to make and create. This can be anything from woodwork/metalwork to photography/drawing or any other form of creation that motivates me. I have been inspired by watching many Youtube channels to have a crack at making my own videos. My workshop has many homemade tools that I have made from the inspiration of other Youtubers.”
+
Comments: The host is British and is now based in Australia. I have subscribed in the past year or so and am constantly impressed by the cleverness of his projects. This has grown to be one of my favourite channels.
Comments: A hobbyist woodworker and maker from Canada who often uses reclaimed materials. Not many videos on the channel but nevertheless some interesting stuff. (Also hosts on the Reclaimed Audio podcast)
Description: “I love creating videos of woodworking and welding projects. I will show the steps I take and the products I use when I make something out of wood or metal. I hope my videos inspire you to MAKE!”
+
Comments: I would like to learn to weld, and this channel inspires me with cleverly executed metalworking projects.
Description: “Everything about 3D Printing and Making! Build guides, tutorials, tips and reviews around the new generation of consumer and prosumer 3D printers (and more)!”
+
Comments: I’d like to own a 3D printer, and this channel seems to be a great source of information about these devices.
Description: “Sometimes you just can’t find what you want and have to make it yourself, or sometimes you just can’t afford it and have to make it yourself. You will find videos of wood working, metal working, brewing, gardening, and some reviews of tools that I find to be handy.”
+
Comments: The channel title is misleading. The host is a very skilled engineer who makes some impressive stuff, like his own CNC, and plasma cutter, and modifies and improves things. He also talks about gardening and sometimes cooking.
Description: “Watch me tell the story of how things I make come to life by using a video camera. Whenever possible, and my favorite, I like to use materials that are found, scraps, leftovers, or saved from a dumpster to build with.”
+
Comments: Projects often involving reclaimed materials. (Also hosts on the Reclaimed Audio podcast)
Description: “Music, Machines and Homemade Music Instruments! Watch this channel to learn about about how to make things and make music. Programmable Musical Marble Machine, Music Box and the Modulin is a few examples of what we have done previously. We are a Swedish Instrumental band and are currently building a new Marble Machine to go on a world tour with once functional. We are also recording a double album.”
+
Comments: The original Marble Machine really caught my attention. It was made of plywood, and played a vibraphone, percussion and guitar by dropping steel “marbles”. It was programmable through Lego Technic pegs inserted into a large cylinder. See the Wikipedia article for details. The new one that they are building to take on tour is an amazing thing! They are documenting the build on this channel. I really enjoy the band’s music too.
+
+
diff --git a/eps/hpr2496/hpr2496_example_output.txt b/eps/hpr2496/hpr2496_example_output.txt
new file mode 100755
index 0000000..68c302e
--- /dev/null
+++ b/eps/hpr2496/hpr2496_example_output.txt
@@ -0,0 +1,24 @@
+$ what_pi
+Revision : 0010
+Release date : Q3 2014
+Model : B+
+PCB Revision : 1.0
+Memory : 512 MB
+Notes : (Mfg by Sony)
+Serial no : 00000000deadbeef
+
+Various configuration and other settings:
+CPU temp=39.0'C
+H264: H264=enabled
+MPG2: MPG2=disabled
+WVC1: WVC1=disabled
+MPG4: MPG4=enabled
+MJPG: MJPG=enabled
+WMV9: WMV9=disabled
+sdtv_mode=0
+sdtv_aspect=0
+
+Network information:
+ Hostname : rpi2
+ IP : 192.168.0.65
+ MAC : b8:27:eb:22:de:ad (eth0)
diff --git a/eps/hpr2496/hpr2496_full_shownotes.html b/eps/hpr2496/hpr2496_full_shownotes.html
new file mode 100755
index 0000000..63fd629
--- /dev/null
+++ b/eps/hpr2496/hpr2496_full_shownotes.html
@@ -0,0 +1,521 @@
+
+
+
+
+
+
+
+ Making a Raspberry Pi inventory (HPR Show 2496)
+
+
+
+
+
+
+
+
+
+
Making a Raspberry Pi inventory (HPR Show 2496)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
I have a number of Raspberry Pis – possibly too many – and I sometimes lose track of which is which: what model, size, name and address each one has. I wanted to be able to keep an inventory of them all, and to this end I wrote myself a little script that can be run on any Pi which will report useful information about it.
+
Every Pi has a unique serial number. Actually it’s randomly generated so there may be a few collisions but it’s close to unique! The same source, /proc/cpuinfo, also contains a revision number which encodes various items of information about the Pi such as release date, model, PCB revision and memory. My script decodes this revision number for you based on a published table.
+
I run a MediaWiki instance on a Pi and have used this script to record details of my Pis there as well as what they are being used for and any planned projects. I now feel more organised!
+
Script
+
The script is called what_pi and uses Bash. The master copy is available on my GitLab repository or can be downloaded from HPR (or archive.org if you are reading these notes there). It is a work in progress and contains various notes pointing out possible shortcomings.
+
The script is listed below, but I will be brief in my description of its features in this episode.
#!/bin/bash -
+#===============================================================================
+#
+# FILE: what_pi
+#
+# USAGE: ./what_pi
+#
+# DESCRIPTION: To be run on a RPi. Reports back what model it is. Uses info
+# from /proc/cpuinfo and a lookup table from
+# http://elinux.org/RPi_HardwareHistory
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: ---
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.0.2
+# CREATED: 2016-06-17 18:17:47
+# REVISION: 2017-04-10 14:45:32
+#
+#===============================================================================
+
+set -o nounset # Treat unset variables as an error
+
+SCRIPT=${0##*/}
+
+#=== FUNCTION ================================================================
+# NAME: network_info
+# DESCRIPTION: Reports some basic network information in a (hopefully)
+# generalised way.
+# TODO: Make it deal with multiple interfaces properly
+# PARAMETERS: None
+# RETURNS: Nothing
+#===============================================================================
+network_info () {
+ local d dev mac
+
+ echo "Network information:"
+ printf " %-11s: %s\n" "Hostname" "$(hostname -f)"
+ printf " %-11s: %s\n" "IP" "$(hostname -I)"
+ for d in /sys/class/net/*/address; do
+ dev="${d%/*}"
+ dev="${dev##*/}"
+ if [[ $dev != 'lo' ]]; then
+ mac="$(cat "$d")"
+ printf " %-11s: %s (%s)\n" "MAC" "$mac" "$dev"
+ fi
+ done
+}
+
+#=== FUNCTION ================================================================
+# NAME: settings_info
+# DESCRIPTION: Reports stuff about settings and config file elements
+# PARAMETERS: None
+# RETURNS: Nothing
+#===============================================================================
+settings_info () {
+ local codec
+
+ #
+ # Is the user in the 'video' group?
+ #
+ if id -Gn | grep -q 'video'; then
+ echo "Various configuration and other settings:"
+ echo "CPU $(vcgencmd measure_temp)"
+ for codec in H264 MPG2 WVC1 MPG4 MJPG WMV9; do
+ echo -e "$codec:\t$(vcgencmd codec_enabled $codec)"
+ done
+ vcgencmd get_config sdtv_mode
+ vcgencmd get_config sdtv_aspect
+ else
+ echo "Can't run 'vgencmd'; you're not in the 'video' group"
+ fi
+}
+
+#=== FUNCTION ================================================================
+# NAME: cleanup_temp
+# DESCRIPTION: Cleanup temporary files in case of a keyboard interrupt
+# (SIGINT) or a termination signal (SIGTERM) and at script
+# exit
+# PARAMETERS: * - names of temporary files to delete
+# RETURNS: Nothing
+#===============================================================================
+function cleanup_temp {
+ for tmp in "$@"; do
+ [ -e "$tmp" ] && rm --force "$tmp"
+ done
+ exit 0
+}
+
+#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#
+# Are we on a Pi at all?
+# TODO: Check this. It works on all my machines, but may not work everywhere
+#
+model=$(grep -m 1 '^model name' /proc/cpuinfo | cut -f2 -d:)
+re="ARMv[0-9]"
+if [[ ! $model =~ $re ]]; then
+ echo "This doesn't seem to be a Raspberry Pi"
+ exit 1
+fi
+
+#
+# Make temporary files and set traps to delete them
+#
+TMP1=$(mktemp) || { echo "$SCRIPT: creation of temporary file failed!"; exit 1; }
+TMP2=$(mktemp) || { echo "$SCRIPT: creation of temporary file failed!"; exit 1; }
+trap 'cleanup_temp $TMP1 $TMP2' SIGHUP SIGINT SIGPIPE SIGTERM EXIT
+
+#
+# Create a table of Pi stuff. Copied from http://elinux.org/RPi_HardwareHistory
+# using simple cut and paste. The result is a table separated by tabs, and
+# this script relies on this fact.
+# You will have to refresh this every time a new Pi model is released. This
+# version is dated Q1 2017 and includes the Pi Zero W
+#
+cat > "$TMP1" <<'ENDTABLE'
+Revision Release Date Model PCB Revision Memory Notes
+Beta Q1 2012 B (Beta) ? 256 MB Beta Board
+0002 Q1 2012 B 1.0 256 MB
+0003 Q3 2012 B (ECN0001) 1.0 256 MB Fuses mod and D14 removed
+0004 Q3 2012 B 2.0 256 MB (Mfg by Sony)
+0005 Q4 2012 B 2.0 256 MB (Mfg by Qisda)
+0006 Q4 2012 B 2.0 256 MB (Mfg by Egoman)
+0007 Q1 2013 A 2.0 256 MB (Mfg by Egoman)
+0008 Q1 2013 A 2.0 256 MB (Mfg by Sony)
+0009 Q1 2013 A 2.0 256 MB (Mfg by Qisda)
+000d Q4 2012 B 2.0 512 MB (Mfg by Egoman)
+000e Q4 2012 B 2.0 512 MB (Mfg by Sony)
+000f Q4 2012 B 2.0 512 MB (Mfg by Qisda)
+0010 Q3 2014 B+ 1.0 512 MB (Mfg by Sony)
+0011 Q2 2014 Compute Module 1 1.0 512 MB (Mfg by Sony)
+0012 Q4 2014 A+ 1.1 256 MB (Mfg by Sony)
+0013 Q1 2015 B+ 1.2 512 MB ?
+0014 Q2 2014 Compute Module 1 1.0 512 MB (Mfg by Embest)
+0015 ? A+ 1.1 256 MB / 512 MB (Mfg by Embest)
+a01040 Unknown 2 Model B 1.0 1 GB (Mfg by Sony)
+a01041 Q1 2015 2 Model B 1.1 1 GB (Mfg by Sony)
+a21041 Q1 2015 2 Model B 1.1 1 GB (Mfg by Embest)
+a22042 Q3 2016 2 Model B (with BCM2837) 1.2 1 GB (Mfg by Embest)
+900021 Q3 2016 A+ 1.1 512 MB (Mfg by Sony)
+900092 Q4 2015 Zero 1.2 512 MB (Mfg by Sony)
+900093 Q2 2016 Zero 1.3 512 MB (Mfg by Sony)
+920093 Q4 2016? Zero 1.3 512 MB (Mfg by Embest)
+9000C1 Q1 2017 Zero W 1.1 512 MB (Mfg by Sony)
+a02082 Q1 2016 3 Model B 1.2 1 GB (Mfg by Sony)
+a020a0 Q1 2017 Compute Module 3 (and CM3 Lite) 1.0 1 GB (Mfg by Sony)
+a22082 Q1 2016 3 Model B 1.2 1 GB (Mfg by Embest)
+a32082 Q4 2016 3 Model B 1.2 1 GB (Mfg by Sony Japan)
+ENDTABLE
+
+#
+# Grab two values from the /proc/cpuinfo file
+#
+REV="$(grep '^Revision' /proc/cpuinfo | awk '{print $3}' | sed 's/^1000//')"
+SER="$(grep '^Serial' /proc/cpuinfo | awk '{print $3}')"
+
+#
+# Make an Awk script which finds the details in the above table and displays
+# them
+#
+cat > "$TMP2" <<'ENDPROG'
+tolower($0) ~ rev {
+ printf "%-13s: %s\n%-13s: %s\n%-13s: %s\n%-13s: %s\n%-13s: %s\n%-13s: %s\n%-13s: %s\n",
+ "Revision",$1,
+ "Release date",$2,
+ "Model",$3,
+ "PCB Revision",$4,
+ "Memory",$5,
+ "Notes",$6,
+ "Serial no",serial
+}
+ENDPROG
+
+#
+# Run Awk on the table with the above script, passing the revision number as
+# a regular expression for searching, and the serial number as a simple
+# string.
+#
+awk -v "rev=^$REV" -v "serial=$SER" -F" *\t *" -f "$TMP2" "$TMP1"
+
+#
+# Report various settings and parameters
+#
+echo
+settings_info
+
+#
+# Report network information
+#
+echo
+network_info
+
+exit
+
+# vim: syntax=sh:ts=8:sw=4:ai:et:tw=78:fo=tcrqn21
+
The process of identifying important features like the revision, release date and model of the Pi is achieved by searching a table of data. The table originates from the website http://elinux.org/RPi_HardwareHistory and has just been copied and pasted into this script. Whenever new Pis are released and the website is updated it will be necessary to refresh this table.
+
If you do this make sure that the tab characters used in this table are preserved since they are used as the field delimiter.
+
The searching of the table and display of results is performed using Awk, with a program that is stored in a temporary file and run on lines 163-181.
+
The script tries to check that it is running on a Pi in the code on lines 97-102. This works for me, but may not be universal.
+
Lines 107-109 contain commands that create temporary files and set up a mechanism to delete them using the trap command. The function cleanup_temp is used to delete the files, and that is defined on lines 76-89. I plan to talk about trap in a forthcoming episode on the way Bash works.
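+
The same pattern can be boiled down to a few lines (an illustrative sketch, not the script itself):
+
#!/bin/bash
+TMP=$(mktemp) || { echo "mktemp failed"; exit 1; }
+
+cleanup () {
+    [ -e "$TMP" ] && rm --force "$TMP"
+}
+trap 'cleanup' SIGHUP SIGINT SIGTERM EXIT
+
+echo "scratch data" > "$TMP"    # work with the temporary file
+# ... the rest of the script; $TMP is removed on exit or interruption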
+
Any improvements to this script are welcome. Please submit a pull request to the GitLab repository.
+
Example output
+
This file (example_output.txt) can be downloaded if desired. See the Links section below.
+
$ what_pi
+Revision : 0010
+Release date : Q3 2014
+Model : B+
+PCB Revision : 1.0
+Memory : 512 MB
+Notes : (Mfg by Sony)
+Serial no : 00000000deadbeef
+
+Various configuration and other settings:
+CPU temp=39.0'C
+H264: H264=enabled
+MPG2: MPG2=disabled
+WVC1: WVC1=disabled
+MPG4: MPG4=enabled
+MJPG: MJPG=enabled
+WMV9: WMV9=disabled
+sdtv_mode=0
+sdtv_aspect=0
+
+Network information:
+ Hostname : rpi2
+ IP : 192.168.0.65
+ MAC : b8:27:eb:22:de:ad (eth0)
+
+
+
diff --git a/eps/hpr2496/hpr2496_what_pi b/eps/hpr2496/hpr2496_what_pi
new file mode 100755
index 0000000..5e73e57
--- /dev/null
+++ b/eps/hpr2496/hpr2496_what_pi
@@ -0,0 +1,197 @@
+#!/bin/bash -
+#===============================================================================
+#
+# FILE: what_pi
+#
+# USAGE: ./what_pi
+#
+# DESCRIPTION: To be run on a RPi. Reports back what model it is. Uses info
+# from /proc/cpuinfo and a lookup table from
+# http://elinux.org/RPi_HardwareHistory
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: ---
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.0.2
+# CREATED: 2016-06-17 18:17:47
+# REVISION: 2017-04-10 14:45:32
+#
+#===============================================================================
+
+set -o nounset # Treat unset variables as an error
+
+SCRIPT=${0##*/}
+
+#=== FUNCTION ================================================================
+# NAME: network_info
+# DESCRIPTION: Reports some basic network information in a (hopefully)
+# generalised way.
+# TODO: Make it deal with multiple interfaces properly
+# PARAMETERS: None
+# RETURNS: Nothing
+#===============================================================================
+network_info () {
+ local d dev mac
+
+ echo "Network information:"
+ printf " %-11s: %s\n" "Hostname" "$(hostname -f)"
+ printf " %-11s: %s\n" "IP" "$(hostname -I)"
+ for d in /sys/class/net/*/address; do
+ dev="${d%/*}"
+ dev="${dev##*/}"
+ if [[ $dev != 'lo' ]]; then
+ mac="$(cat "$d")"
+ printf " %-11s: %s (%s)\n" "MAC" "$mac" "$dev"
+ fi
+ done
+}
+
+#=== FUNCTION ================================================================
+# NAME: settings_info
+# DESCRIPTION: Reports stuff about settings and config file elements
+# PARAMETERS: None
+# RETURNS: Nothing
+#===============================================================================
+settings_info () {
+ local codec
+
+ #
+ # Is the user in the 'video' group?
+ #
+ if id -Gn | grep -q 'video'; then
+ echo "Various configuration and other settings:"
+ echo "CPU $(vcgencmd measure_temp)"
+ for codec in H264 MPG2 WVC1 MPG4 MJPG WMV9; do
+ echo -e "$codec:\t$(vcgencmd codec_enabled $codec)"
+ done
+ vcgencmd get_config sdtv_mode
+ vcgencmd get_config sdtv_aspect
+ else
+ echo "Can't run 'vgencmd'; you're not in the 'video' group"
+ fi
+}
+
+#=== FUNCTION ================================================================
+# NAME: cleanup_temp
+# DESCRIPTION: Cleanup temporary files in case of a keyboard interrupt
+# (SIGINT) or a termination signal (SIGTERM) and at script
+# exit
+# PARAMETERS: * - names of temporary files to delete
+# RETURNS: Nothing
+#===============================================================================
+function cleanup_temp {
+ for tmp in "$@"; do
+ [ -e "$tmp" ] && rm --force "$tmp"
+ done
+ exit 0
+}
+
+#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#
+# Are we on a Pi at all?
+# TODO: Check this. It works on all my machines, but may not work everywhere
+#
+model=$(grep -m 1 '^model name' /proc/cpuinfo | cut -f2 -d:)
+re="ARMv[0-9]"
+if [[ ! $model =~ $re ]]; then
+ echo "This doesn't seem to be a Raspberry Pi"
+ exit 1
+fi
+
+#
+# Make temporary files and set traps to delete them
+#
+TMP1=$(mktemp) || { echo "$SCRIPT: creation of temporary file failed!"; exit 1; }
+TMP2=$(mktemp) || { echo "$SCRIPT: creation of temporary file failed!"; exit 1; }
+trap 'cleanup_temp $TMP1 $TMP2' SIGHUP SIGINT SIGPIPE SIGTERM EXIT
+
+#
+# Create a table of Pi stuff. Copied from http://elinux.org/RPi_HardwareHistory
+# using simple cut and paste. The result is a table separated by tabs, and
+# this script relies on this fact.
+# You will have to refresh this every time a new Pi model is released. This
+# version is dated Q1 2017 and includes the Pi Zero W
+#
+cat > "$TMP1" <<'ENDTABLE'
+Revision Release Date Model PCB Revision Memory Notes
+Beta Q1 2012 B (Beta) ? 256 MB Beta Board
+0002 Q1 2012 B 1.0 256 MB
+0003 Q3 2012 B (ECN0001) 1.0 256 MB Fuses mod and D14 removed
+0004 Q3 2012 B 2.0 256 MB (Mfg by Sony)
+0005 Q4 2012 B 2.0 256 MB (Mfg by Qisda)
+0006 Q4 2012 B 2.0 256 MB (Mfg by Egoman)
+0007 Q1 2013 A 2.0 256 MB (Mfg by Egoman)
+0008 Q1 2013 A 2.0 256 MB (Mfg by Sony)
+0009 Q1 2013 A 2.0 256 MB (Mfg by Qisda)
+000d Q4 2012 B 2.0 512 MB (Mfg by Egoman)
+000e Q4 2012 B 2.0 512 MB (Mfg by Sony)
+000f Q4 2012 B 2.0 512 MB (Mfg by Qisda)
+0010 Q3 2014 B+ 1.0 512 MB (Mfg by Sony)
+0011 Q2 2014 Compute Module 1 1.0 512 MB (Mfg by Sony)
+0012 Q4 2014 A+ 1.1 256 MB (Mfg by Sony)
+0013 Q1 2015 B+ 1.2 512 MB ?
+0014 Q2 2014 Compute Module 1 1.0 512 MB (Mfg by Embest)
+0015 ? A+ 1.1 256 MB / 512 MB (Mfg by Embest)
+a01040 Unknown 2 Model B 1.0 1 GB (Mfg by Sony)
+a01041 Q1 2015 2 Model B 1.1 1 GB (Mfg by Sony)
+a21041 Q1 2015 2 Model B 1.1 1 GB (Mfg by Embest)
+a22042 Q3 2016 2 Model B (with BCM2837) 1.2 1 GB (Mfg by Embest)
+900021 Q3 2016 A+ 1.1 512 MB (Mfg by Sony)
+900092 Q4 2015 Zero 1.2 512 MB (Mfg by Sony)
+900093 Q2 2016 Zero 1.3 512 MB (Mfg by Sony)
+920093 Q4 2016? Zero 1.3 512 MB (Mfg by Embest)
+9000C1 Q1 2017 Zero W 1.1 512 MB (Mfg by Sony)
+a02082 Q1 2016 3 Model B 1.2 1 GB (Mfg by Sony)
+a020a0 Q1 2017 Compute Module 3 (and CM3 Lite) 1.0 1 GB (Mfg by Sony)
+a22082 Q1 2016 3 Model B 1.2 1 GB (Mfg by Embest)
+a32082 Q4 2016 3 Model B 1.2 1 GB (Mfg by Sony Japan)
+ENDTABLE
+
+#
+# Grab two values from the /proc/cpuinfo file
+#
+REV="$(grep '^Revision' /proc/cpuinfo | awk '{print $3}' | sed 's/^1000//')"
+SER="$(grep '^Serial' /proc/cpuinfo | awk '{print $3}')"
+
+#
+# Make an Awk script which finds the details in the above table and displays
+# them
+#
+cat > "$TMP2" <<'ENDPROG'
+tolower($0) ~ rev {
+ printf "%-13s: %s\n%-13s: %s\n%-13s: %s\n%-13s: %s\n%-13s: %s\n%-13s: %s\n%-13s: %s\n",
+ "Revision",$1,
+ "Release date",$2,
+ "Model",$3,
+ "PCB Revision",$4,
+ "Memory",$5,
+ "Notes",$6,
+ "Serial no",serial
+}
+ENDPROG
+
+#
+# Run Awk on the table with the above script, passing the revision number as
+# a regular expression for searching, and the serial number as a simple
+# string.
+#
+awk -v "rev=^$REV" -v "serial=$SER" -F" *\t *" -f "$TMP2" "$TMP1"
+
+#
+# Report various settings and parameters
+#
+echo
+settings_info
+
+#
+# Report network information
+#
+echo
+network_info
+
+exit
+
+# vim: syntax=sh:ts=8:sw=4:ai:et:tw=78:fo=tcrqn21
diff --git a/eps/hpr2505/hpr2505_full_shownotes.html b/eps/hpr2505/hpr2505_full_shownotes.html
new file mode 100755
index 0000000..45c582e
--- /dev/null
+++ b/eps/hpr2505/hpr2505_full_shownotes.html
@@ -0,0 +1,234 @@
+
+
+
+
+
+
+
+ The power of GNU Readline - part 3 (HPR Show 2505)
+
+
+
+
+
+
+
+
+
The power of GNU Readline - part 3 (HPR Show 2505)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Terminology
+
The GNU Readline manual uses the terms point, mark and region, which need definition. If you follow some of the links to this manual in this series you will encounter these terms, and I think they can be confusing.
+
+
point
+
The current cursor position (we have simply referred to it as the cursor position so far in this series). Also called the insertion point.
+
+
mark
+
A cursor position saved by the set-mark command (we’ll look at this in a forthcoming episode)
+
+
region
+
The text between point and mark (also for a future episode)
+
+
+
In this series I will try not to use these terms without an explanation or reminder of what they mean. I will be looking at these and commands that affect them more in later episodes.
+
Readline Arguments
+
Readline commands (which are what are being invoked by the key sequences we have seen so far) can take numeric arguments. Sometimes the argument acts as a repeat count, other times it is the sign of the argument that is significant.
+
The argument itself begins with the Meta key, pressed in conjunction with a digit. If a multi-digit number is required this is followed by further digits without the Meta key. The first “digit” can be a minus sign (‘-’) if the numeric argument is to be negative.
+
For example, to repeat the C-d (Control-d) command 10 times type: M-10C-d (Meta-10Control-d). This will delete forward 10 characters.
+
A negative argument reverses the effect of the command, so M--C-k (Meta--Control-k), instead of deleting characters forward to the end of the line deletes them backwards to the start of the line.
+
Reminder
+
Some of the sequences we are looking at in this series can be intercepted and interpreted by:
+
+
The terminal
+
The desktop environment
+
+
For example, I have been testing the key sequences used in these episodes using the Terminator terminal emulator on XFCE and I have found M-l (see later) was interpreted by Terminator and could not be typed.
+
Remember that all meta key sequences can be entered as Esc followed by the key – press the Esc key then press the second key (l in this particular instance).
+
More character and word manipulations (and others)
+
Let’s get into some more Readline key sequences.
+
Commenting out a line
+
This is something I often do. I’m typing a complex command and I want to save what I’m doing and check something, or answer the phone perhaps. I used to hit the Home key or C-a and prefix the line with a # comment character then hit Return. The line is in the history and can be recalled, continued and executed after the comment has been removed.
+
There are Readline sequences that can help:
+
+
M-# (Meta-#)
+
Add a comment symbol to the start of the line and issue a Return.
+
+
M-1M-# (Meta-1Meta-#)
+
If the current line begins with a comment remove it and execute the line with a Return. (If it doesn’t begin with # then add one in the same way as M-#)
+
+
+
The second sequence is actually M-# with an argument and any argument will have the same effect. So M-0M-# would also remove the comment and enter the command. This sequence is actually a “toggle” which adds a comment if there isn’t one and removes it if there is.
+
The comment character can be changed (which we’ll discuss in a later episode when we look at the Readline configuration file), which can be of relevance if Readline is being used in an application where a different comment character is required.
+
Example
+
+
Type the Bash command: echo "Hello" but don’t press Return.
+
Type M-#. The command turns into #echo "Hello" and the line is accepted as if Return had been pressed.
+
Recall the line with the up arrow key.
+
Type M-1M-#; the comment character is removed and the line accepted.
+
+
Transpose characters
+
I’m a bad typist. I so often type words like directroy with transposed letters. However, Readline offers a facility to correct such errors:
+
+
C-t (Control-t)
+
Transpose characters. Swap the character before the cursor (point) with that under the cursor, then move the cursor to the right.
+
+
+
Example
+
After typing the word incorrectly position the cursor as shown:
+
directroy
+ ^
+
Press C-t and the ‘o’ and ‘r’ are transposed and the cursor moves to the ‘y’.
+
directory
+ ^
+
If the insertion point is at the end of the line, then this transposes the last two characters of the line.
+
You can also transpose words, where a word is defined, as we have discussed earlier in the series, as a sequence of letters and digits.
+
+
M-t (Meta-t)
+
Transpose words. The cursor (point) can be anywhere in a word (or just before it). It and the word before it are swapped and the cursor is left after the pair of words. If there is no word before the word the cursor is on then nothing happens. If the cursor is at the end of the line the last word and the word before it are swapped – repeatedly for every M-t.
+
+
+
Example
+
You think that split infinitives are bad:
+
echo "to boldly go where..."
+ ^
+
Press M-t and the result is:
+
echo "to go boldly where..."
+ ^
+
Note that, even though the cursor is not on a word transposition still takes place. Press M-t again and the result is:
+
echo "to go where boldly..."
+ ^
+
Change the case of words
+
Readline allows you to change the case of whole words, to upper case, to lower case or change the case of the first letter of a word to upper case (capitalise it).
+
+
M-u (Meta-u)
+
Uppercase the current (or following) word. With a negative argument, uppercase the previous word, but do not move the cursor.
+
+
M-l (Meta-l)
+
Lowercase the current (or following) word. With a negative argument, lowercase the previous word, but do not move the cursor.
+
+
M-c (Meta-c)
+
Capitalise the current (or following) word. With a negative argument, capitalise the previous word, but do not move the cursor.
+
+
+
To change the case of a whole word the cursor must be before the word or on its first letter. If it is part-way through the word then the rest of the word from the cursor to the end of the word is changed.
+
For capitalisation the situation is similar. The capital is at the start of the following word, or it occurs where the cursor is positioned. The rest of the word is lowercased.
+
Examples
+
1. Upper and lower case
+
Given the following command with the cursor positioned as shown (NB: type the line and press M-b three times to move three words backward):
+
echo "hacker public radio"
+ ^
+
Press M-u and the result is:
+
echo "HACKER public radio"
+ ^
+
The current word has been changed to upper case and the cursor moved after it. Press M-u again and the result is:
+
echo "HACKER PUBLIC radio"
+ ^
+
The following word has been changed to upper case and the cursor moved after it. Press M--M-l (Meta--Meta-l) (remember the simplest negative argument is achieved by pressing the Meta key in conjunction with a dash) and the result is:
+
echo "HACKER public radio"
+ ^
+
The previous word has been changed to lower case but the cursor has not been moved.1
+
Regarding how much of a word is changed:
+
echo "hacker public radio"
+ ^
+
Pressing M-u here gives the following result:
+
echo "hacKER public radio"
+ ^
+
2. Capitalisation
+
Using another command:
+
echo "the capital of scotland is edinburgh"
+ ^
+
Press M-b six times then press M-c, M-f twice, and M-c the result is:
+
echo "The capital of Scotland is edinburgh"
+ ^
+
Press M-f and M-c the result is:
+
echo "The capital of Scotland is Edinburgh"
+ ^
+
What was going on here should be self-evident from the previous episodes in this series! ☺
+
Revert the line
+
We saw the undo command in episode 1 of this series: C-_ (Control-_) or C-xC-u (Control-xControl-u) but there is a short-cut that undoes all changes. The following description is copied from the GNU Readline manual, section 1.4.8.
+
+
M-r (Meta-r)
+
Undo all changes made to this line. This is like executing the undo command enough times to get back to the beginning.
+
+
+
Example
+
I don’t use this sequence often. When experimenting with it for this episode I did not find the description particularly useful, so I spent a bit longer looking into it.
+
First the undo command: this will revert individual steps, as mentioned. Given the following line containing a command:
+
echo "Star Wars"
+ ^
+
Move back to the ‘r’ in ‘Star’ and press C-t:
+
echo "Stra Wars"
+ ^
+
Move forwards to the ‘W’ in ‘Wars’ and press C-t:
+
echo "StraW ars"
+ ^
+
Now press C-_ to perform one undo:
+
echo "Stra Wars"
+ ^
+
And again:
+
echo "Star Wars"
+ ^
+
We are back to the original state except that the cursor is in a different place. Pressing C-_ again results in a blank line.
+
Given the same starting point, pressing M-r results in the blank line – all of the changes are undone including the typing of the command in the first place.
+
If you recall a command from the history, neither of these key sequences do anything because that is the original state of the line. The line can be deleted with C-u (which kills the entire line as we saw in episode 2).
+
Also, if the recalled line is edited, individual edits can be reverted with C-_ or all of them with M-r, but only back to the state of the line that was recalled.
In the audio I wondered whether M-3 followed by M-u would uppercase the next three words to the right, and I later found that it does, and it works with M-l and M-c.
+
+So pressing M-3M-c with the cursor positioned on the ‘h’ of ‘hacker’ in echo "hacker public radio" results in: echo "Hacker Public Radio"↩
+
+
+
+
+
+
+
diff --git a/eps/hpr2526/hpr2526_awk10_ex1.awk b/eps/hpr2526/hpr2526_awk10_ex1.awk
new file mode 100755
index 0000000..a141878
--- /dev/null
+++ b/eps/hpr2526/hpr2526_awk10_ex1.awk
@@ -0,0 +1,15 @@
+#!/usr/bin/awk -f
+{
+ a[l] = $0
+ l++
+ print NR" "$0
+}
+END{
+ print "Numeric subscripts:"
+ for (i = l - 1; i >= 0; i--)
+ print i": "a[i]
+
+ print "Actual subscripts:"
+ for (i in a)
+ print i": "a[i]
+}
diff --git a/eps/hpr2526/hpr2526_awk10_ex2.awk b/eps/hpr2526/hpr2526_awk10_ex2.awk
new file mode 100755
index 0000000..ac1e2e5
--- /dev/null
+++ b/eps/hpr2526/hpr2526_awk10_ex2.awk
@@ -0,0 +1,12 @@
+#!/usr/bin/awk -f
+{
+ lines[NR] = $0
+}
+
+END{
+ for (i in lines) {
+ split(lines[i],flds,/ *, */,seps)
+ for (j in flds)
+ printf "|%s| (%s)\n",flds[j],seps[j]
+ }
+}
diff --git a/eps/hpr2526/hpr2526_full_shownotes.epub b/eps/hpr2526/hpr2526_full_shownotes.epub
new file mode 100755
index 0000000..6764467
Binary files /dev/null and b/eps/hpr2526/hpr2526_full_shownotes.epub differ
diff --git a/eps/hpr2526/hpr2526_full_shownotes.html b/eps/hpr2526/hpr2526_full_shownotes.html
new file mode 100755
index 0000000..b8a513a
--- /dev/null
+++ b/eps/hpr2526/hpr2526_full_shownotes.html
@@ -0,0 +1,325 @@
+
+
+
+
+
+
+
+ Gnu Awk - Part 10 (HPR Show 2526)
+
+
+
+
+
+
+
+
+
Gnu Awk - Part 10 (HPR Show 2526)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
This is the tenth episode of the "Learning Awk" series which is being produced by Mr. Young and myself.
+
In this episode I want to talk more about the use of arrays in GNU Awk and then I want to examine some real-world examples of the use of awk.
+
A bit more about arrays
+
A recap
+
We know from earlier in the series that arrays in awk are associative. That is, the index used to refer to an element is a string. The contents of each array element may be a number or a string (or nothing). An associative array is also called a hash. An array index is also referred to as a subscript.
+
We also know that array elements are referred to with expressions such as:
+
array[index]
+
so, fruit["apple"] means the element of the array fruit which is indexed by the string "apple". The index value is actually an expression, so it can be arbitrarily complex, such as:
+
ind1 = "app"
+ind2 = "le"
+print fruit[ind1 ind2]
+
Here the two strings "app" and "le" are concatenated to make the index "apple".
+
We saw earlier in the series that the presence of an array element is checked with an expression using:
+
index in array
+
So an example might be:
+
if ("apple" in fruit)
+ print fruit["apple"]
+
Looping through the elements of an array is achieved with the specialised for statement as we saw in an earlier episode:
+
for (ind in fruit)
+ print fruit[ind]
+
Using numbers as array subscripts
+
In awk array subscripts are always strings. If a number is used then this is converted into a string. This is not a problem with statements like the following:
+
data[42] = 8388607
+
The integer number 42 is converted into the string "42" and everything works as normal.
+
However, awk can handle other number bases. For example, in common with many other programming languages, a leading zero denotes an octal number, making data[052] the same as data[42] (because decimal 42 is octal 52).
+
Also data[0x2A] is the same as data[42] because hexadecimal 2A is decimal 42.
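+
This is easy to demonstrate (a small sketch; gawk accepts octal and hexadecimal constants unless run in POSIX mode):
+
$ gawk 'BEGIN {
+    data[42] = "answer"
+    print (052 in data)     # octal 52 is decimal 42
+    print (0x2A in data)    # hex 2A is decimal 42
+}'
+1
+1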
+
The way in which numbers are converted into strings in awk is important to understand. A built-in variable called CONVFMT defines the conversion for floating point numbers. Behind the scenes the function sprintf is used. (This is like printf which we saw in episode 9, but it returns a formatted string rather than printing anything.)
+
The default value for CONVFMT is "%.6g" which means (according to the manual) to print a number in either scientific notation or in floating-point notation, whichever uses fewer characters, using at most six significant digits. The setting of CONVFMT can be adjusted in the script if desired.
+
Knowing this, the index can be determined in cases like the following (a sketch modelled on the example in the GNU Awk User’s Guide):
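+
$ gawk 'BEGIN {
+    CONVFMT = "%2.2f"
+    data[3.14159] = "pi"
+    for (i in data) print "stored under subscript: " i
+}'
+stored under subscript: 3.14
+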
The thing to be careful of is adjusting CONVFMT between storing and retrieving an array element!
+
What if the subscript is uninitialised?
+
The GNU Awk User’s Guide mentions this. An uninitialised variable treated as a number is zero, but treated as a string is a null string "". The following script is in the file awk10_ex1.awk which may be downloaded:
+
#!/usr/bin/awk -f
+{
+ a[l] = $0
+ l++
+ print NR" "$0
+}
+END{
+ print "Numeric subscripts:"
+ for (i = l - 1; i >= 0; i--)
+ print i": "a[i]
+
+ print "Actual subscripts:"
+ for (i in a)
+ print i": "a[i]
+}
+
This can lead to unexpected results:
+
$ echo -e "A\nB\nC" | ./awk10_ex1.awk
+1 A
+2 B
+3 C
+Numeric subscripts:
+2: C
+1: B
+0:
+Actual subscripts:
+: A
+0:
+1: B
+2: C
+
The variable l is used as the index to the array a. It is uninitialised the first time it is used so the string it provides is an empty string, which is a valid array index. Then it is incremented and it then takes numeric values. The main rule prints each line as it receives it just to prove it’s actually seeing all three lines.
+
In the END rule the array is printed (in reverse order) using numeric indexes 2, 1 and zero. There is nothing in element zero.
+
Then the array is printed again using the "index in array" method. Notice how the letter A is there with an empty index. Notice also that there is an element with index zero too. That was created in the previous loop since accessing a non-existent array element creates it!
+
Had the two lines in the main rule been replaced as shown the outcome would have been more predictable:
+
a[l] = $0
+ l++
+
Replacement:
+
a[l++] = $0
+
Remembering that l++ returns the value of l then increments it, this forces the first value returned to be zero because it is a numeric expression.
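+
Re-running the earlier test with this change gives a more predictable result (a quick sketch):
+
$ echo -e "A\nB\nC" | awk '{ a[l++] = $0 } END { for (i = 0; i < l; i++) print i": "a[i] }'
+0: A
+1: B
+2: C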
+
Deleting array elements
+
There is a delete statement which can delete a given array element. For example, in the above demonstration of subscript issues, the spurious element could have been deleted with:
+
delete a[0]
+
The generic format is:
+
delete array[index]
+
We already saw that array elements with empty subscripts or empty values can exist in an array, so we know that making an element empty does not delete it.
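+
A one-line sketch showing the difference between an empty element and a deleted one:
+
$ awk 'BEGIN { a["x"] = ""; print ("x" in a); delete a["x"]; print ("x" in a) }'
+1
+0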
+
An entire array can be deleted with the generic statement:
+
delete array
+
The array remains declared but is empty, so re-using its name as an ordinary (scalar) variable after using delete on it will result in an error.
+
Splitting strings into arrays
+
There are two functions in awk which generate arrays from strings by splitting them up by some criterion. The functions are: split and patsplit. We will look at split in this episode and patsplit in a subsequent one.
+
split
+
The general format of the split function is:
+
split(string, array [ , fieldsep [ , seps ] ])
+
The first two arguments are mandatory but the second two are optional.
+
The function divides string into pieces separated by fieldsep and stores the pieces in array and the separator strings in the seps array (a GNU Awk extension).
+
Successive pieces are placed in array[1], array[2], and so on. The array is emptied before the splitting begins.
+
If fieldsep is omitted then the value of the built-in variable FS is used, so split can be seen as a method of generating fields from a string in a similar way to the main field processing that awk performs. If fieldsep is provided then it is a regular expression (again in the same way as FS).
+
The seps array is used to hold each of the separators. If fieldsep is a single space then any leading white space goes into seps[0] and any trailing white space goes into seps[n], where n is the number of elements in array.
+
The function split returns the number of pieces placed in array.
+
Example of using split
+
The following script is in the file awk10_ex2.awk which may be downloaded:
+
#!/usr/bin/awk -f
+{
+ lines[NR] = $0
+}
+
+END{
+ for (i in lines) {
+ split(lines[i],flds,/ *, */,seps)
+ for (j in flds)
+ printf "|%s| (%s)\n",flds[j],seps[j]
+ }
+}
+
It reads lines into an array called lines using the record number as the index. In the END rule it processes this array, splitting each line into another array called flds and the separators into an array called seps.
+
The fieldsep value is a regular expression consisting of a comma surrounded by any number of spaces. Each element of the flds array is printed between vertical-bar delimiters to demonstrate that any leading and trailing spaces have been removed. The corresponding seps element is appended to each output line, enclosed in parentheses, so you can see what was captured there.
+
The following example scripts are not specifically about the use of arrays in awk. This is more of an attempt to demonstrate some real-world awk scripts for reference.
+
Scanning a log file
+
I have a script I wrote to add tags and summaries to HPR episodes that have none. I seem to mention this project every month on the Community News show! The script receives email messages with updates, and keeps a log with lines that look like this as it processes them:
Note: in case you are wondering about the times, they are local to the server on which the script runs, which is based in California, USA. I run things from the UK timezone (UTC or UTC+1).
+
I like to add a report on the number of tags and summaries processed each month to the Community News show notes, so I wanted to scan this log file for the month’s total.
+
Originally I used a pipeline with grep and wc but the task is well suited to awk. This was my solution (with added line numbers for reference):
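+
It was along these lines (a sketch reconstructed from the description below, so details such as the exact log format may differ from the original):
+
 1  #!/usr/bin/awk -f
+ 2  BEGIN {
+ 3      re = "^" strftime("%Y/%m/") ".. .* [0-9]{1,4}:"
+ 4      count = 0
+ 5  }
+ 6  $0 ~ re {
+ 7      print ++count ": " $0
+ 8  }
+ 9  END {
+10      print count
+11  }
+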
In the BEGIN (lines 2-5) rule a regular expression is defined in the variable re.
+
+
This starts with a ‘^’ character which anchors the expression to the start of the line.
+
This is followed by part of the date generated with the built-in function strftime. Here we generate the current year and the current month number and a slash.
+
Two dots follow which cater for the day number, then there is a space and ‘.*’ meaning zero or more characters.
+
This is followed by a space then between one and four digits. This matches the show number after the ‘[INFO]’ part.
+
The expression ends with a colon which matches the one after the show number.
+
+
In the BEGIN rule a variable count is also initialised to zero (not strictly necessary, but good programming practice)
+
The main rule for processing the input file (lines 6-8) matches each line against the regular expression. If it matches, the line is printed preceded by the current value of count (which is pre-incremented before being printed).
+
The END rule (lines 9-11) prints the final value of count.
+
+
Running this towards the end of February 2018 we get:
Of course, I would not run this awk script on the command line as shown here. I’d place it in a Bash script to simplify the typing, but I will not demonstrate that here.
+
Parsing a tab-delimited file
+
I am currently looking after the process of uploading HPR episodes to the Internet Archive (IA) - archive.org. To manage this I use a Python library called internetarchive and a command line tool called ia. The ia tool lets me interrogate the archive, returning data about shows that have been uploaded as well as allowing me to upload and change them.
+
In some cases I find it necessary to replace the audio formats which have been generated automatically by archive.org with copies generated by the HPR software. This is because we want to ensure these audio files contain metadata (audio tags). The shows generated by archive.org are converted from the WAV file we upload in a process referred to as derivation, and contain no metadata.
+
I needed to be able to tell which HPR episodes had derived audio and which had original audio. The ia tool could do this but in a format which was difficult to parse, so I wrote an awk script to do it for me.
+
The data I needed to parse consists of tab-delimited lines. The first line contains the names of all of the columns. However, some of the columns were not always present or were in different orders, so this required a little more work to parse.
+
Here is a sample of the input file format:
+
$ ia list -va hpr2450 | head -3
+name sha1 format btih height source length width mtime crc32 size bitrate original md5
+hpr2450.afpk b71f63ef1e8c359b3f0f7a546835919a8a7889da Columbia Peaks derivative 1513450216 656e162d 107184 hpr2450.wav 0ace3e0ae96510a85bee6dda3b69ab78
+hpr2450.flac cd917c46eaf22f0ec0253bd018b475380e83ce7e Flac 0 derivative 738.08 0 1515280267 e7934979 27556168 hpr2450.wav 7a9b716932b33a2e6713ae3f4e23d24d
+
The following script, called parse_ia_audio.awk, was what I produced to parse this data.
+
#!/usr/bin/awk -f
+
+#-------------------------------------------------------------------------------
+# Process tab-delimited data from the Internet Archive with a field name
+# header, reporting particular fields. The algorithm is general though this
+# instance is specific.
+#
+# In this case we extract only the audio files
+#
+# This script is meant to be used thus:
+# $ ia list -va hpr2450 | ./parse_ia_audio.awk
+# hpr2450.flac derivative
+# hpr2450.mp3 derivative
+# hpr2450.ogg derivative
+# hpr2450.opus original
+# hpr2450.spx original
+# hpr2450.wav original
+#
+#-------------------------------------------------------------------------------
+
+BEGIN {
+ FS = "\t"
+}
+
+#
+# Read the header line and collect the fields into an array such that a search
+# by field name returns the field number.
+#
+NR == 1 {
+ for (i = 1; i <= NF; i++) {
+ fld[$i] = i
+ }
+}
+
+#
+# Read the rest of the data, reporting only the lines relating to audio files
+# and print the fields 'name' and 'source'
+#
+NR > 1 && $(fld["name"]) ~ /[^.]\.(flac|mp3|ogg|opus|spx|wav)/ {
+ printf "%-15s %s\n",$(fld["name"]),$(fld["source"])
+}
+
The BEGIN rule defines the field delimiter as the TAB character.
+
The first rule runs only when the first record is encountered. This is the header with the names of the columns (fields). A for loop scans the fields which have been split up by awk’s usual record splitting. The fields are named $1, $2 etc. As i increments from 1 to the number of fields in the record, each field number is stored in the array fld, indexed by the contents of that field.
+
The second rule is invoked if two conditions are met:
+
+
The record number is greater than 1
+
The field whose number was recorded against the header name "name" (field 1 in the example) contains a dot followed by one of flac, mp3, ogg, opus, spx or wav
+
+
This rule prints the fields indexed by the column names "name" and "source". The first comment in the script shows what this will look like.
+
Note the use of expressions like:
+
$(fld["name"])
+
Here awk will find the value stored in fld["name"] (1 in the example data) and will reference the field called $(1), which is another way of writing $1. The parentheses are necessary to remove ambiguity.
+
So, the script is just printing columns for certain selected lines, but is able to cope with the columns being in different positions at different times because it prints them "by name".
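+
The technique is easy to try in isolation (a minimal sketch with made-up data):
+
$ printf 'beta\talpha\n2\t1\n' | awk -F'\t' '
+    NR == 1 { for (i = 1; i <= NF; i++) fld[$i] = i }
+    NR > 1  { print $(fld["alpha"]) }'
+1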
+
Most of the queries handled by the Internet Archive API return JSON-format results (not something that awk can easily parse), but for some reason this one returns a varying tab-delimited file. Still, awk was able to come to the rescue!
+
+
diff --git a/eps/hpr2526/hpr2526_full_shownotes.pdf b/eps/hpr2526/hpr2526_full_shownotes.pdf
new file mode 100755
index 0000000..4efb362
Binary files /dev/null and b/eps/hpr2526/hpr2526_full_shownotes.pdf differ
diff --git a/eps/hpr2544/hpr2544_full_shownotes.html b/eps/hpr2544/hpr2544_full_shownotes.html
new file mode 100755
index 0000000..cd21b71
--- /dev/null
+++ b/eps/hpr2544/hpr2544_full_shownotes.html
@@ -0,0 +1,205 @@
+
+
+
+
+
+
+
+ How I prepared episode 2493: YouTube Subscriptions - update (HPR Show 2544)
+
+
+
+
+
+
+
+
+
How I prepared episode 2493: YouTube Subscriptions - update (HPR Show 2544)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
In show 2493 I listed a number of the YouTube channels I watch. Some of what I did to prepare the notes was to cut and paste information from YouTube pages, but the basic list itself was generated programmatically. I thought the process I used might be of interest to somebody so I am describing it here.
+
Components
+
I needed four components to achieve what I wanted:

My YouTube subscription list, exported as an OPML file

The xmlstarlet tool to extract the channel data from the OPML

The Template Toolkit to turn the extracted data into a Markdown list

The pandoc document converter tool to generate HTML
+
+
I will talk a little about the first three components in this episode in order to provide an overview.
+
YouTube subscription list
+
To find this go to the ‘Subscription Manager’ page of YouTube (https://www.youtube.com/subscription_manager) and select the ‘Manage Subscriptions’ tab. At the bottom of the page is an ‘Export’ option which generates OPML. By default this is written to a file called subscription_manager.
+
An OPML file is in XML format and is designed to be used by an application that processes RSS feeds such as a Podcatcher or a Video manager. For me it is a convenient format to parse in order to extract the basic channel information. I could not find any other way of doing this apart from scraping the YouTube website. If you know better please let me know in a comment or by submitting a show of your own.
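
For reference, the relevant part of such an OPML file has a structure something like this (an illustrative fragment; the attribute names match the xmlstarlet output shown later, but the exact layout of a real export may differ):

<opml version="1.1">
  <body>
    <outline text="YouTube Subscriptions" title="YouTube Subscriptions">
      <outline text="Computerphile" title="Computerphile" type="rss"
        xmlUrl="https://www.youtube.com/feeds/videos.xml?channel_id=UC9-y-6csu5WGm29I7JiwpnA"/>
    </outline>
  </body>
</opml>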
+
Using xmlstarlet
+
This is a tool designed to parse XML files from the command line. I run Debian Testing and was able to install it from the repository.
+
There are other tools that could be used for parsing but xmlstarlet is the Swiss Army knife of such tools for analysing and parsing such data. The tool deserves a show to itself, or even a short series. I know that Ken Fallon (who uses it a lot) has expressed a desire to go into detail about it at some point.
+
I am just going to describe how I decided to generate a simple CSV file from the OPML and found out how to do so with xmlstarlet.
+
Finding the structure of the XML
+
I copied the subscription_manager file to yt_subs.opml as a more meaningful name.
+
I ran the following command against this file to find out its structure:
+
$ xmlstarlet el -u yt_subs.opml
+opml
+opml/body
+opml/body/outline
+opml/body/outline/outline
+
It is possible to work this out by looking at the XML but it’s all squashed together and is difficult to read. It can be reformatted as follows:
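
Assuming the xmllint tool mentioned in the following note, something like this does the job:

$ xmllint --format yt_subs.opml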
The program xmllint is part of the libxml2-utils package on Debian, which also requires libxml2.
+
I think the xmlstarlet output is easier to read and understand.
+
The XML contains attributes (such as the title) which you can ask xmlstarlet to report on:
+
$ xmlstarlet el -a yt_subs.opml | head -11
+opml
+opml/@version
+opml/body
+opml/body/outline
+opml/body/outline/@text
+opml/body/outline/@title
+opml/body/outline/outline
+opml/body/outline/outline/@text
+opml/body/outline/outline/@title
+opml/body/outline/outline/@type
+opml/body/outline/outline/@xmlUrl
+
Extracting data from the XML
+
So, the xmlstarlet command I came up with (after some trial and error) was as follows. I have broken the long pipeline into lines by adding backslashes and newlines so it’s slightly more readable, and in this example I have just shown the first 5 lines it generated. In actuality I wrote the output to a file called yt_data.csv:
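
A sketch of the pipeline, reconstructed from the description below (the concat() expression and the exact quoting are assumptions; the sample rows are rebuilt from channel data shown later in these notes):

$ (echo "title,feed,seen,skip"; \
   xmlstarlet sel -t \
       -m '/opml/body/outline/outline' \
       -s A:T:- '@title' \
       -v 'concat(@title,",",@xmlUrl,",0,0")' \
       -n yt_subs.opml) | head -5
title,feed,seen,skip
Anne of All Trades,https://www.youtube.com/feeds/videos.xml?channel_id=UCCkFJmUgzrZdkeHl_qPItsA,0,0
bigclivedotcom,https://www.youtube.com/feeds/videos.xml?channel_id=UCtM5z2gkrGRuWd0JQMx76qA,0,0
Computerphile,https://www.youtube.com/feeds/videos.xml?channel_id=UC9-y-6csu5WGm29I7JiwpnA,0,0
David Waelder,https://www.youtube.com/feeds/videos.xml?channel_id=UCcapFP3gxL1aJiC8RdwxqRA,0,0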
There is an echo command and the xmlstarlet command enclosed in parentheses. This causes Bash to run both commands in a subshell. Within it the echo command generates the column titles for the CSV, as we’ll see later. The output of the entire subshell is written as a single stream of lines, so the header and the data all go to the same place.
+
The xmlstarlet command takes a sub-command which in this case is sel which causes it to “Select data or query XML document(s)” (quoted from the manual page)
+
+
-t defines a template
+
-m precedes the XPATH expression to match (as part of the template). The XPATH expression here is /opml/body/outline/outline which targets each XML node which contains the attributes we want.
+
-s A:T:- @title defines sorting where A:T:- is the operation and @title is the XPATH expression to sort by
+
-v expression defines what is to be reported; in this case it’s the @title and @xmlUrl attributes, then two zeroes all separated by commas thereby making a line of CSV data
+
-n appends a newline after each matched record; the XML file to be read (yt_subs.opml here) is given as the final argument
+
+
The entire sub-process is piped into head -5 which returns the first 5 lines. In the actual case the output is redirected to a file with > yt_data.csv
+
The reason for making four columns will become clear later, but in summary it’s so that I can mark lines in particular ways. The ‘seen’ column is for marking the channels I spoke about in an earlier episode (2202) so I didn’t include them again in this one, and the ‘skip’ column is for channels I didn’t want to include for various reasons.
+
+
Generating HTML with Template Toolkit
+
Template Toolkit is a template system. There are many of these for different programming languages and applications. I have been using this one for over 15 years and am very happy with its features and capabilities.
+
I currently use it when generating show notes for my HPR contributions, and it’s used in many of the scripts I use to perform tasks as an HPR Admin.
+
Installing Template Toolkit
+
The Template Toolkit (TT) is written in Perl so it’s necessary to have Perl installed on the machine it’s to be run on. This happens as a matter of course on most Linux and Unix-like operating systems. It is necessary to have a version of Perl later than 5.6.0 (I have 5.26.1 on Debian Testing).
+
The Toolkit can be installed from the CPAN (Comprehensive Perl Archive Network), but if you do not have your system configured to do this the alternative is shown below (method copied from the Template Toolkit site):
+
$ wget http://cpan.org/modules/by-module/Template/Template-Toolkit-2.26.tar.gz
+$ tar zxf Template-Toolkit-2.26.tar.gz
+$ cd Template-Toolkit-2.26
+$ perl Makefile.PL
+$ make
+$ make test
+$ sudo make install
+
These instructions relate to the current version of Template Toolkit at the time of writing, version 2.26. The site mentioned above will refer to the latest version.
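
If your system is configured for CPAN, installation can be as simple as this one-liner (the distribution is called Template-Toolkit, but the module to request is Template):

$ cpan Template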
+
Making a template
+
Using the Template Toolkit is a big subject, and I will not go into great detail here. If there is any interest I will do an episode on it in the future.
+
The principle is that TT reads a template file containing directives in the TT syntax. Usually TT is called out of a script written in Perl (or Python – a new Python version has been released recently). The template can be passed data from the script, but it can also obtain data itself. I used this latter ability to process the CSV file.
+
TT directives are enclosed in [% and %] sequences. They provide features such as loops, variables, control statements and so forth.
+To make TT access the CSV data file I used a plugin that comes with the Template Toolkit package. This plugin is called Template::Plugin::Datafile. It is linked to the required data file with the following directive:
+
+[% USE name = datafile('file_path', delim = ',') %]
+
+
The plugin reads files with fields delimited by colons by default, but in this instance we redefine this to be a comma. The name variable is actually a list of hashes which gives access to the lines of the data.
+
The following example template shows TT being connected to the file we created earlier, with a loop which iterates through the list of hashes, generating output data.
+
[% USE ytlist = datafile('yt_data.csv', delim = ',') -%]
+- YouTube channels:
+[% FOREACH chan IN ytlist -%]
+[% NEXT IF chan.seen || chan.skip -%]
+ - [*[% chan.title %]*]([% chan.feed.replace('feeds/videos\.xml.channel_id=', 'channel/') %])
+[% END -%]
+
Note that the TT directives are interleaved with the information we want to write. The line ‘- YouTube channels:’ is an example of a Markdown list element.
+
This is followed by a FOREACH loop which iterates through the ytlist list, placing the current line in the hash variable chan. The loop is terminated with an END directive.
+
The NEXT directive causes the loop to skip a line of data if either the seen or skip column holds the value true (1). These fields are referenced as chan.seen and chan.skip meaning the elements of the hash chan. Before running this template I edited the list and set these values to control what was reported.
+
The line after NEXT is simply outputting the contents of the hash. It is turning the data into a Markdown sub-list. Because the URL in the OPML file contained the address of a feed, whereas we need a channel address, the replace function (actually a virtual method) performs the necessary editing.
+
The expression chan.feed.replace() shows the replace virtual method being applied to the field feed of the chan hash.
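
As a small illustration (the channel id here is hypothetical, not from the real data), the following pair of directives rewrites a feed URL into a channel URL; note that the unescaped dot before channel_id conveniently matches the ? in the URL:

[% x = 'https://www.youtube.com/feeds/videos.xml?channel_id=UCabc' -%]
[% x.replace('feeds/videos\.xml.channel_id=', 'channel/') %]

Run through tpage, this prints https://www.youtube.com/channel/UCabc.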
+
Running the template
+
Running the template is simply a matter of calling the tpage command on it, where this command is part of the Template Toolkit package:
+
$ tpage yt_template.tpl | head -5
+- YouTube channels:
+ - [*Anne of All Trades*](https://www.youtube.com/channel/UCCkFJmUgzrZdkeHl_qPItsA)
+ - [*bigclivedotcom*](https://www.youtube.com/channel/UCtM5z2gkrGRuWd0JQMx76qA)
+ - [*Computerphile*](https://www.youtube.com/channel/UC9-y-6csu5WGm29I7JiwpnA)
+ - [*David Waelder*](https://www.youtube.com/channel/UCcapFP3gxL1aJiC8RdwxqRA)
+
The output is Markdown and these lines are links. I only showed the first 5 lines generated. It is actually possible to pipe the output of tpage directly into pandoc to generate HTML as follows:
+
$ tpage hpr____/yt_template.tpl | pandoc -f markdown -t html5 | head -5
+<ul>
+<li>YouTube channels:
+<ul>
+<li><a href="https://www.youtube.com/channel/UCCkFJmUgzrZdkeHl_qPItsA"><em>Anne of All Trades</em></a></li>
+<li><a href="https://www.youtube.com/channel/UCtM5z2gkrGRuWd0JQMx76qA"><em>bigclivedotcom</em></a></li>
+
You can see the result of running this to generate the notes for show 2493 by looking at the Links section of the long notes on that show.
+
Conclusion
+
I guess I could be accused of overkill here. When creating the notes for show 2493 I actually did more than what I have described here because it made the slightly tedious process of building a list a bit more interesting than it would have been otherwise.
+
Also, should I ever wish to record another show updating my YouTube subscriptions I can do something similar to what I have done here, so it is not necessarily wasted effort.
+
Along the way I learnt about getting data out of YouTube and I learnt more about using xmlstarlet. I also learnt some new things about Template Toolkit.
+
Of course, I also contributed another episode to Hacker Public Radio!
+
You may not agree, but I think this whole process is cool (even though it might be described as over-engineered).
+
+
diff --git a/eps/hpr2544/hpr2544_yt_template.tpl b/eps/hpr2544/hpr2544_yt_template.tpl
new file mode 100755
index 0000000..9573d61
--- /dev/null
+++ b/eps/hpr2544/hpr2544_yt_template.tpl
@@ -0,0 +1,6 @@
+[% USE ytlist = datafile('yt_data.csv', delim = ',') -%]
+- YouTube channels:
+[% FOREACH chan IN ytlist -%]
+[% NEXT IF chan.seen || chan.skip -%]
+ - [*[% chan.title %]*]([% chan.feed.replace('feeds/videos\.xml.channel_id=', 'channel/') %])
+[% END -%]
diff --git a/eps/hpr2558/hpr2558_full_shownotes.html b/eps/hpr2558/hpr2558_full_shownotes.html
new file mode 100755
index 0000000..eb25b10
--- /dev/null
+++ b/eps/hpr2558/hpr2558_full_shownotes.html
@@ -0,0 +1,262 @@
+
+
+
+
+
+
+
+ Battling with English - part 1 (HPR Show 2558)
+
+
+
+
+
+
+
+
+
Battling with English - part 1 (HPR Show 2558)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
This is the first episode of a series about the English language. In it I want to look at some of the problems people (including myself) have with it. I plan to do several episodes and I want to keep them short.
+
The English language is old and has changed – evolved – in many ways over the years. It has come from a multitude of sources, and it’s difficult to say what is correct in an absolute way.
+
For example, when I was at school we were taught that "nice" should not be used in written material. At that time it was becoming common to see phrases like "I had a nice time" meaning pleasant (in a bland sort of way). In my "Concise Oxford Dictionary" from 1976 the 6th definition, "agreeable" is marked "colloquialism", whereas today this is a common usage.
+
However, it’s easy to use the wrong word in the wrong context. You might choose one that sounds similar for example. You might also have problems with the spelling of a chosen word. Spelling in English is not always logical. You might also find yourself confused about the use of punctuation – the correct use of apostrophes can be challenging for example.
+
In this series I want to examine some of the problem areas and try to give you the means of remembering the right way.
+
Note: I’m not an authority on this stuff, but I have tried to teach myself not to make these mistakes over the years. I just wanted to share what I have learnt [1] with some links to higher authorities.
+
Using the wrong word
+
Confusing 'then' and 'than'
+
I see this a lot, on the web, in emails and in texts. Here are the definitions of than (follow links for the full details):

meaning 1: (conjunction) introducing the second member of a comparison
+
+
example 1: "Am I taller than you or are you taller than me?"
+
+
example 2: "I talk about why used stuff is often better than new stuff"
+
+
+
+
meaning 2: (preposition/conjunction) in expressions introducing an exception or contrast
+
+
example 1: "Other than fish, John eats no meat"
+
+
example 2: "We do not filter the shows in any way other than to check if they are audible and not blatant attempts at spam"
+
+
+
+
meaning 3: (conjunction) in expressions indicating one thing happening immediately after another
+
+
example: "No sooner was the concrete poured than someone walked over it"
+
+
+
Examples of what you should never write
+
Example 1
+
+
I like to listen to jazz every now and than
+
+
This should be "now and then". It’s an idiom that means "occasionally" or "every so often".
+
Example 2
+
+
Wine is better then beer
+
+
This almost implies that you should drink wine and follow it with beer! It should be than because a comparison is being made between wine and beer.
+
+
Confusing 'there', 'their' and 'they're'
+
This one overlaps into a topic I want to look at in a later episode because one of the options contains an apostrophe. The confusion here seems to be that the three words sound pretty much the same!
+
Let’s start with definitions (follow links for the full details):

there: (adverb) in, at or to that place or position

their: (possessive determiner) belonging to or associated with the people or things previously mentioned

they’re: a contraction of they are

example: "I was just at my friends’ house. They’re busy redecorating"
+
+
+
Examples of what you should never write
+
Example 1
+
+
Look over their!
+
+
Look over their what?? This one should have used there.
+
Example 2
+
+
I climbed into the attic and they’re was a wasp’s nest their!
+
+
The wasp’s nest seriously disturbed the writer’s grammar. It should have been "there was a wasp’s nest there"; otherwise you would have to try to understand "and they are was" as well as the possessive "their", which makes no sense at all.
+
+
Confusing 'tenet' and 'tenant'
+
I see and hear this all the time. I reckon it has actually become more common in the last few years.
+
Let’s define the words (follow links for the full details):

tenet: (noun) a principle or belief, especially one forming part of a religion, philosophy or ideology

tenant, meaning 1: (noun) a person who occupies land or property rented from a landlord
+
+
example: "He used to rent some rooms over a shop, but he didn’t like being a tenant"
+
+
+
+
tenant, meaning 2: (verb) to occupy land or property as a tenant
+
+
example: "I used to tenant some rooms over a shop"
+
+
+
How to remember which is which? Grammar Girl suggests remembering that "tenant" is about where a person lives. It ends with "ant" and ants might also live there.
+
Examples of what you should never write
+
Example 1
+
+
The tenet of Wildfell Hall
+
+
This is not a novel by Anne Brontë! (Her novel is The Tenant of Wildfell Hall.) Reading it literally, "The belief of Wildfell Hall" doesn’t make much sense.

[1] One thing I have learnt is that "learned" and "learnt" are both correct and mean the same. However, "learnt" is more common in the UK, whereas "learned" is used both in the UK and the USA.
+
[2] Paraphrased from the Wikipedia article on the "Hacker Ethic"
+
+
+
+
+
+
+
diff --git a/eps/hpr2581/hpr2581_DIN_rail_fitting.png b/eps/hpr2581/hpr2581_DIN_rail_fitting.png
new file mode 100755
index 0000000..b1b5116
Binary files /dev/null and b/eps/hpr2581/hpr2581_DIN_rail_fitting.png differ
diff --git a/eps/hpr2581/hpr2581_almost_assembled.png b/eps/hpr2581/hpr2581_almost_assembled.png
new file mode 100755
index 0000000..eb0edac
Binary files /dev/null and b/eps/hpr2581/hpr2581_almost_assembled.png differ
diff --git a/eps/hpr2581/hpr2581_box_contents_1.png b/eps/hpr2581/hpr2581_box_contents_1.png
new file mode 100755
index 0000000..0087182
Binary files /dev/null and b/eps/hpr2581/hpr2581_box_contents_1.png differ
diff --git a/eps/hpr2581/hpr2581_box_contents_2.png b/eps/hpr2581/hpr2581_box_contents_2.png
new file mode 100755
index 0000000..4b0b935
Binary files /dev/null and b/eps/hpr2581/hpr2581_box_contents_2.png differ
diff --git a/eps/hpr2581/hpr2581_box_contents_3.png b/eps/hpr2581/hpr2581_box_contents_3.png
new file mode 100755
index 0000000..92bd647
Binary files /dev/null and b/eps/hpr2581/hpr2581_box_contents_3.png differ
diff --git a/eps/hpr2581/hpr2581_first_print.png b/eps/hpr2581/hpr2581_first_print.png
new file mode 100755
index 0000000..eea89b1
Binary files /dev/null and b/eps/hpr2581/hpr2581_first_print.png differ
diff --git a/eps/hpr2581/hpr2581_full_shownotes.html b/eps/hpr2581/hpr2581_full_shownotes.html
new file mode 100755
index 0000000..f7e050a
--- /dev/null
+++ b/eps/hpr2581/hpr2581_full_shownotes.html
@@ -0,0 +1,125 @@
+
+
+
+
+
+
+
+ My new 3D printer - impressions of the Creality Ender 3 (HPR Show 2581)
+
+
+
+
+
+
+
+
+
My new 3D printer - impressions of the Creality Ender 3 (HPR Show 2581)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
I have been thinking of buying a 3D printer for a year or so. I had thought of getting a Prusa i3 MK3 in kit form, but although it’s cheaper than the built form this printer is not cheap, and I doubted my ability to build it. I was also unsure whether there was a real need for the capabilities of a 3D printer in my life, and whether such a purchase was justified.
+
I had noticed the Chinese Creality CR10 printer in the recent past, and wondered about buying one of these at about half the price of the Prusa. This is a good-sized printer which, as I understand it, comes fully assembled, and it has had many good reviews.
+
When the Creality Ender 3 was released in April 2018 for around half the price of the CR10 it looked worth the risk to see if I really needed a 3D printer. So I bought one (from Amazon) in June.
+
As I write this (2018-06-10) it’s been less than a week since it was delivered, so this is a very preliminary look at the printer.
+
The Creality Ender 3 printer
+
The Ender 3 is a Fused Deposition Modeling (FDM) Cartesian printer with a heated bed and a 220 mm × 220 mm × 250 mm printable volume. It originates in China and arrives in a partially disassembled form. Assembly is not difficult and takes perhaps an hour or two.
+
Overview
+
The frame of the printer is made from aluminium extrusions and metal fittings which have a high-quality finish. Many other parts are made from metal, and a few from plastic, but there are no 3D-printed parts in it. The X and Y movements are controlled by stepper motors which drive toothed belts. The Z axis is controlled by a single threaded rod. The moving parts are supported by what seem to be hard rubber wheels which ride on the frame.
+
As can be seen from the pictures below, the printer is quite compact: there is a metal box containing the main controller under the print bed, with an LCD panel and control knob to the right of the unit. The power supply is attached to the frame behind the rightmost vertical extrusion.
+
The spool holder is mounted on top of the frame (in step with the compact layout) and the filament feed (extruder) consists of a motor driving a small gear and a pulley system to the left of the frame. Filament is fed through a Bowden tube to the moving “hot end”.
+
The printer will print from a micro SD card which can be inserted into a slot on the front of the control box. There is also a micro-USB connection for a PC or laptop.
+
Assembly
+
The printer comes well packed in a surprisingly small box.
+
+Boxed printer
+
A small amount of PLA filament is included and the enclosed instructions are largely pictorial.
+
Unpacking the box shows a number of pre-assembled parts. The bed, controller and extruder are already assembled for example.
+
+Contents of box, part 1
+
+Contents of box, part 2
+
Tools are included, and all of the bolts and screws are packaged in labelled plastic bags. There is an SD card with detailed assembly instructions, though we found the instruction leaflet was sufficient to allow us to build the printer.
+
Building did not take very long, though there were three of us to hold things and interpret the instructions.
+
+Almost assembled (Bowden tube not connected)
+
There are many assembly videos on YouTube, as well as blogs about the process, so I will not go into any more detail about this here.
+
Assembly issues and observations
+
+
The X axis belt was already fitted and well tensioned. However, fitting and tensioning the belt on the Y axis was difficult, and it was later found that the slack belt resulted in poor print quality
+
Feeding the filament through the extruder and Bowden tube for the first time was difficult. The filament should be cut diagonally, but it still snagged as it left the extruder. We found that temporarily disconnecting the Bowden tube allowed it to be fed in without trouble.
+
We didn’t have any problems inserting the microSD card but there are reports of it missing the slot and ending up inside the control box.
+
+
First print
+
There is a pre-sliced object on the SD card, but we didn’t use that.
+
We used Ultimaker Cura as a slicer in the first instance, optimising the parameters for rapid printing rather than precision.
+
+First print was a small “coin” my son had designed in Fusion 360
+
A 1Kg reel of PLA was delivered a few days after the printer, and we started printing items like a DIN Rail fitting for my Raspberry Pi 3B+. The quality was good, even though we still had work to do to optimise everything.
+
+DIN Rail fitting for a RPi 3B+
+
Usage issues and observations
+
+
Levelling the bed is a bit tedious, but it has to be parallel to the plane in which the nozzle moves. There are four levelling knobs under the corners of the bed. It is necessary to lower the hot end to a corner of the bed with enough space under it to insert a sheet of paper. The nozzle should gently touch the paper but not impede its movement as it is slid back and forth. This check and adjustment then needs to be carried out on each corner. Moving the nozzle around the bed is done by using the LCD control panel.
+I have seen one review of this printer on YouTube where the reviewer ran a procedure that automatically positions the nozzle at each of the four corners. This feature is not available in the firmware on my printer but I have found a GCode file on Thingiverse which performs this action.
+
Adhesion to the bed was a problem. When printing with the supplied filament it was too strong and removal of the object from the bed was difficult. Later (with new filament) things kept detaching. We found that a glue stick applied to the bed was needed to make the item adhere. The optimisation of this stage is ongoing. Better bed levelling seems to be helping significantly here.
+
Problems with the bed being warped have been reported for this printer. There is no very obvious warping on mine, though we have not yet checked it thoroughly. Some users are adding a glass top to the bed, and various add-on adhesive surfaces are available. This is something we are looking into.
+
+
Conclusion
+
There are upgrades for the printer. Belt tensioners have been made available for the CR10, and these are almost compatible with the Ender 3 - a situation which is likely to change soon. Also, there is an attachment for the hot end which helps direct air from the fan onto the nozzle. Alternative firmware is also available which can be flashed onto the controller. See the Thingiverse link for more information about printable enhancements.
+
This is a great little printer at an amazing price. There are a few issues with it, as mentioned, but overall, in our experience so far, it’s a really good printer for a beginner.
+
+
diff --git a/eps/hpr2596/hpr2596_full_shownotes.html b/eps/hpr2596/hpr2596_full_shownotes.html
new file mode 100755
index 0000000..109d2c2
--- /dev/null
+++ b/eps/hpr2596/hpr2596_full_shownotes.html
@@ -0,0 +1,324 @@
+
+
+
+
+
+
+
+ Battling with English - part 2 (HPR Show 2596)
+
+
+
+
+
+
+
+
+
Battling with English - part 2 (HPR Show 2596)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Further notes about 'then' and 'than'
+
In the last episode I mentioned the confusion between then and than. I referred to the etymology of the two words, but I didn’t go into detail.
+
Reading the Online Etymology Dictionary, one interesting point in the page about than is that it was:
+
+
Developed from the adverb then, and not distinguished from it by spelling until c. 1700.
+
+
So, it would seem that the two words are related and historically were the same! However, I’d guess that people using them interchangeably now are unlikely to be making deliberate reference to usage from the 1700s.
+
+
Problems with apostrophes
+
Let us now examine the apostrophe, which is a punctuation mark. It is used for:
+
+
Indicating that letters have been omitted, such as in a contracted form of words. For example when the phrase they are is contracted to they’re.
+
Turning a word into a possessive form such as in the cat’s paw
+
When the plural of a single letter (or digit) is required such as in dot your i’s and cross your t’s.
+
+
There are other uses but you can look at the Wikipedia article for them if you want to dig deeper. I may well revisit this topic in a later show in this series.
+
Apostrophes in contractions
+
The term contraction describes the written form of a shortened word. In linguistics the terms used for this process are elision and deletion, meaning the omission of one or more sounds. This is usually done to make words easier to pronounce as we’ll see in the following examples.
+
In English there are many cases where the apostrophe is used to signal that letters have been omitted. Some examples are:
+
Long form   →   Contracted form
cannot      →   can’t
I am        →   I’m
you are     →   you’re
is not      →   isn’t
let us      →   let’s
it is       →   it’s

The apostrophe in can’t indicates that it is a contraction and not a word of its own. Had it been written as cant then that would have been an entirely different word (cant means hypocritical and sanctimonious talk).
+
The same sort of argument can be made for other cases.
+
Apostrophes in possessives
+
If you look at the linguistic arguments in the Wikipedia article you will see that this particular use of the apostrophe is wider than just the possessive usage, but we will not go into too much detail here.
+
There tends to be a lot of confusion about this use of the apostrophe, which we will look at in this episode.
+
Plural forms of words
+
Because there seems to be a lot of confusion in regard to possessives and plurals it seems a good idea to consider this subject first. Plurals are often (but not always) formed by adding an ‘s’ (or 'es') to the end of a word. These words do not take apostrophes:
+
Singular     →   Plural        Ending
cat          →   cats          s
crocodile    →   crocodiles    s
programmer   →   programmers   s
sandwich     →   sandwiches    es
volcano      →   volcanoes     es

Of course, there are other plurals in English. The plural of child is not childs but children. The plural of amoeba is not strictly amoebas but amoebae, because the word has a Latin origin. (However, amoebas is gaining acceptance, though it was not acceptable during my education.)
+
Possessive forms of words
+
This seems to be one of the main issues that puzzles some people. There is a difference between cats and cat’s. You would write:
+
+
I have two cats ✔
+
+
Meaning you have two feline friends. You would not write:
+
+
I have two cat’s ✖
+
+
This is an incomplete sentence. You are saying you have two things belonging to cats (possessed by them) but you haven’t said what the things are, so the sentence makes no sense. The word cat’s here is not the plural of cat.
+
You could write:
+
+
This is my cat’s basket ✔
+
+
Which means the basket belonging to your cat.
+
Word         →   Singular Possessive   Example
cat          →   cat’s                 This cat’s fur is black
crocodile    →   crocodile’s           A crocodile’s teeth can regenerate many times
programmer   →   programmer’s          A programmer’s life is a hard one
sandwich     →   sandwich’s            This is my sandwich’s filling

Possessive forms of plural words
+
What if you want to express the idea of possession by many things? To write about a toy owned by several cats you’d write something like:
+
+
The cats’ catnip mouse makes a sound when moved. ✔
+
+
Other examples:
+
+
The boys’ bedrooms are down this way. ✔
+
+
Things get a little more complex when the plural is not formed by just adding an s, such as in man → men:
+
+
Follow the signs to the men’s changing rooms ✔
+
+
This is because men doesn’t end with an s whereas boys does.

Historically, English added es to a noun to show possession.
+
+
The toy of a single dog would have been a doges toy
+
The toys of a single dog would have been a doges toys
+
The toy of multiple dogs would have been the dogses toy
+
The mother of multiple children would have been the childrenes mother
+
The emblem of the country Wales would have been Waleses emblem
+
+
Over time the e in es was replaced by an apostrophe, and if that left “s’s” at the end of a word the last s was removed
+
+
The toy of a single dog became a dog’s toy
+
The toys of a single dog became a dog’s toys
+
The toy of multiple dogs became the dogs’ toy
+
The mother of multiple children became the children’s mother
+
The emblem of the country Wales became Wales’ emblem
+
+
+
This explanation helped me, so I hope it helps you too.
+
Use of apostrophes with single letters and digits
+
As mentioned already the apostrophe is used when a plural form of a single letter or digit is required.
+
This is a little confusing but makes sense when you think about it. As Grammar Girl puts it:
+
+
The apostrophe is especially important when you are writing about a’s, i’s, and u’s because without the apostrophe readers could easily think you are writing the words as, is, and us.
+
+
The one that catches everyone at some time
+
Given what we’ve seen so far, you might expect that the word it when made possessive would become it’s, but that is wrong! In fact, the word it’s is an abbreviation for it is.
+
Examples:
+
+
It’s very warm in Scotland at the moment. ✔
+
+
Here it’s means it is.
+
+
It’s been interesting researching this topic. ✔
+
+
Short for it has.
+
+
The horse stamped its feet, shook its head, and neighed. ✔
+
+
Two possessives: its feet, its head.
+
Why?
+
The its/it’s anomaly is simply a result of how the language has evolved.
+
It is likely that this has happened to make the possessive its conform to some of the other possessives like yours, his, hers, ours, theirs, and whose.
+
Remember: it’s always means it is or it has.
+
+
Examples of what you should never write:
+
+
Apple’s £2.00 per Kilo ✖
+
+
This is a case of the so-called Greengrocer’s Apostrophes (also Greengrocers’ Apostrophes - I hope you now understand why either way of writing the name is acceptable!)
+
This mistake is astonishingly common; in some cases writers seem to assume all words that end in ‘s’ should have an apostrophe. Don’t add an apostrophe to a word just because the word ends with the letter s!
+
+
These banana’s are overripe, we should have bought fewer ✖
+
+
Here we need the plural form bananas, not the possessive form.

I like pig’s. Dog’s look up to us. Cat’s look down on us. Pig’s treat us as equal’s. ✖
+
+
Every plural in this example has been written incorrectly as a possessive with an apostrophe. See the site for many more!
+
+
Future episodes on apostrophes
+
I have tried to make this episode as straightforward as I can. However, there are other factors such as what the various writing style guides say, and how some of the edge conditions are handled, that complicate matters.
+
If it seems like a good idea I may go into more detail in a later episode of this series. The Wikipedia reference below covers many of these topics, so you may wish to refer to it for more information. Many of the other references also explain things in very helpful ways.
+
If there are other aspects of the apostrophe subject that you would like me to look at in the future please let me know.
+
+
diff --git a/eps/hpr2610/hpr2610_awk12_ex1.awk b/eps/hpr2610/hpr2610_awk12_ex1.awk
new file mode 100755
index 0000000..e2f02d7
--- /dev/null
+++ b/eps/hpr2610/hpr2610_awk12_ex1.awk
@@ -0,0 +1,6 @@
+{
+ patsplit($0,a,/[^,]*/)
+ for (i in a)
+ printf "%s ",a[i]
+ print ""
+}
diff --git a/eps/hpr2610/hpr2610_awk12_ex10.awk b/eps/hpr2610/hpr2610_awk12_ex10.awk
new file mode 100755
index 0000000..0fd7998
--- /dev/null
+++ b/eps/hpr2610/hpr2610_awk12_ex10.awk
@@ -0,0 +1,23 @@
+#!/usr/bin/awk -f
+
+#
+# Sort the indices as strings in ascending order
+#
+BEGIN{
+ PROCINFO["sorted_in"]="@ind_str_asc"
+}
+
+#
+# Make a frequency table of the first letter of each word
+#
+{
+ freq[substr($1,1,1)]++
+}
+
+#
+# Print the results in the frequency table
+#
+END{
+ for (i in freq)
+ printf "%s: %d\n",i,freq[i]
+}
diff --git a/eps/hpr2610/hpr2610_awk12_ex2.awk b/eps/hpr2610/hpr2610_awk12_ex2.awk
new file mode 100755
index 0000000..f85b908
--- /dev/null
+++ b/eps/hpr2610/hpr2610_awk12_ex2.awk
@@ -0,0 +1,6 @@
+{
+ patsplit($0,a,/([^,]*)|("[^"]+")/)
+ for (i in a)
+ printf "<%s> ",a[i]
+ print ""
+}
diff --git a/eps/hpr2610/hpr2610_awk12_ex3.awk b/eps/hpr2610/hpr2610_awk12_ex3.awk
new file mode 100755
index 0000000..3ab5ec0
--- /dev/null
+++ b/eps/hpr2610/hpr2610_awk12_ex3.awk
@@ -0,0 +1,9 @@
+{
+ flds = patsplit($0,a,/[A-Za-z]+/,s)
+ for (i in a)
+ printf "%s ",a[i]
+ print ""
+ for (i=1; i<=flds; i++)
+ printf "%s ",s[i]
+ print ""
+}
diff --git a/eps/hpr2610/hpr2610_awk12_ex4.awk b/eps/hpr2610/hpr2610_awk12_ex4.awk
new file mode 100755
index 0000000..dd092ad
--- /dev/null
+++ b/eps/hpr2610/hpr2610_awk12_ex4.awk
@@ -0,0 +1,8 @@
+BEGIN{
+ PROCINFO["sorted_in"]="@val_str_asc"
+}
+{
+ split($0,a," ")
+ for (i in a)
+ printf "%d: %s\n",i,a[i]
+}
diff --git a/eps/hpr2610/hpr2610_awk12_ex5.awk b/eps/hpr2610/hpr2610_awk12_ex5.awk
new file mode 100755
index 0000000..0e7dba8
--- /dev/null
+++ b/eps/hpr2610/hpr2610_awk12_ex5.awk
@@ -0,0 +1,8 @@
+BEGIN{
+ a[1]="Jones"
+ a[2]="X"
+ a[3]="Smith"
+ asort(a)
+ for (i in a)
+ printf "%s %s\n",i,a[i]
+}
diff --git a/eps/hpr2610/hpr2610_awk12_ex6.awk b/eps/hpr2610/hpr2610_awk12_ex6.awk
new file mode 100755
index 0000000..a09fe00
--- /dev/null
+++ b/eps/hpr2610/hpr2610_awk12_ex6.awk
@@ -0,0 +1,11 @@
+BEGIN{
+ a["a"]="Jones"
+ a["b"]="X"
+ a["c"]="Smith"
+ asort(a,b)
+ for (i in b)
+ printf "b[%s] = %s\n",i,b[i]
+ print ""
+ for (i in a)
+ printf "a[%s] = %s\n",i,a[i]
+}
diff --git a/eps/hpr2610/hpr2610_awk12_ex7.awk b/eps/hpr2610/hpr2610_awk12_ex7.awk
new file mode 100755
index 0000000..a3a2bb0
--- /dev/null
+++ b/eps/hpr2610/hpr2610_awk12_ex7.awk
@@ -0,0 +1,8 @@
+BEGIN{
+ a["third"]="Jones"
+ a["second"]="X"
+ a["first"]="Smith"
+ asorti(a)
+ for (i in a)
+ printf "%s %s\n",i,a[i]
+}
diff --git a/eps/hpr2610/hpr2610_awk12_ex8.awk b/eps/hpr2610/hpr2610_awk12_ex8.awk
new file mode 100755
index 0000000..4fe91e1
--- /dev/null
+++ b/eps/hpr2610/hpr2610_awk12_ex8.awk
@@ -0,0 +1,20 @@
+BEGIN{
+ a["third"]="Jones"
+ a["second"]="X"
+ a["first"]="Smith"
+ asorti(a,b)
+
+ print "What array a contains:"
+ for (i in a)
+ printf "a[%s] = %s\n",i,a[i]
+ print ""
+
+ print "What array b contains:"
+ for (i in b)
+ printf "b[%s] = %s\n",i,b[i]
+ print ""
+
+ print "Accessing original array a with sorted indices in b"
+ for (i in b)
+ printf "%6s: %s\n",b[i],a[b[i]]
+}
diff --git a/eps/hpr2610/hpr2610_awk12_ex9.awk b/eps/hpr2610/hpr2610_awk12_ex9.awk
new file mode 100755
index 0000000..aff8ec8
--- /dev/null
+++ b/eps/hpr2610/hpr2610_awk12_ex9.awk
@@ -0,0 +1,8 @@
+BEGIN{
+ a["a"]="Jones"
+ a["b"]="X"
+ a["c"]="Smith"
+ asort(a,b,"@val_str_desc")
+ for (i in b)
+ printf "%s %s\n",i,b[i]
+}
diff --git a/eps/hpr2610/hpr2610_awk12_extra.awk b/eps/hpr2610/hpr2610_awk12_extra.awk
new file mode 100755
index 0000000..6f23d10
--- /dev/null
+++ b/eps/hpr2610/hpr2610_awk12_extra.awk
@@ -0,0 +1,14 @@
+#!/usr/bin/awk -f
+#
+# Awk script to take a sequence of words separated by spaces and turn them
+# into a string where each word is followed by as many hyphens as there are
+# letters in the word itself.
+#
+{
+ for (i=1; i<=NF; i++){
+ fill=$i
+ gsub(/./,"-",fill)
+ printf "%s%s",$i,fill
+ }
+ print ""
+}
diff --git a/eps/hpr2610/hpr2610_full_shownotes.epub b/eps/hpr2610/hpr2610_full_shownotes.epub
new file mode 100755
index 0000000..11daa4a
Binary files /dev/null and b/eps/hpr2610/hpr2610_full_shownotes.epub differ
diff --git a/eps/hpr2610/hpr2610_full_shownotes.html b/eps/hpr2610/hpr2610_full_shownotes.html
new file mode 100755
index 0000000..4b61dac
--- /dev/null
+++ b/eps/hpr2610/hpr2610_full_shownotes.html
@@ -0,0 +1,518 @@
+
+
+
+
+
+
+
+ Gnu Awk - Part 12 (HPR Show 2610)
+
+
+
+
+
+
+
+
+
+
Gnu Awk - Part 12 (HPR Show 2610)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
This is the twelfth episode of the “Learning Awk” series which is being produced by Mr. Young and myself.
+
In this episode I want to continue with the subject I started in episode 10, an advanced-level look at arrays in Awk.
+
In case it might be of interest I have also included a section describing a recent use I made of awk to solve a problem, though this does not use arrays.
+
More about arrays in Awk
+
Using patsplit
+
We saw the split function in episode 10, but there is also a more powerful function for splitting strings into array elements called patsplit.
+
+
patsplit(string, array [, fieldpat [, seps ] ])
+
+Divide string into pieces defined by fieldpat, storing the pieces in array and the separator strings in the seps array.
+Otherwise this behaves like the split function described in episode 10; consult that episode for the details of this type of string splitting. The main difference from split is that the third argument, fieldpat, is a regular expression which defines the fields rather than the separators.
+
+
+
Examples
+
1. Using patsplit to split a comma-delimited string. This could just as well have been done by setting the FS variable and using awk’s standard splitting mechanism (or FPAT which has not been covered in this series so far):
+
$ cat awk12_ex1.awk
+{
+ patsplit($0,a,/[^,]*/)
+ for (i in a)
+ printf "%s ",a[i]
+ print ""
+}
+$ x="An apple a day keeps the doctor away"
+$ echo "${x// /,}"
+An,apple,a,day,keeps,the,doctor,away
+$ echo "${x// /,}" | awk -f awk12_ex1.awk
+An apple a day keeps the doctor away
+
Note that the fieldpat argument is not the delimiter, but a definition of the field structure itself. Here the regexp specifies a sequence of zero or more characters which are not commas.
+
Note also that Bash variable 'x' is set to a string, then this is edited to replace spaces by commas and fed to the awk script - which removes them again!
+
2. Another example using a more complex regular expression:
+
$ cat awk12_ex2.awk
+{
+ patsplit($0,a,/([^,]*)|("[^"]+")/)
+ for (i in a)
+ printf "<%s> ",a[i]
+ print ""
+}
+$ echo "A,\"red bird\",in,the,hand,is,worth,two,in,the,bush" | awk -f awk12_ex2.awk
+<A> <"red bird"> <in> <the> <hand> <is> <worth> <two> <in> <the> <bush>
+
This regexp handles data which is more like the standard CSV format:
+
([^,]*)|("[^"]+")
+
+
The first sub-expression deals with a series of zero or more not commas.
+
The second one looks for a double-quoted string containing one or more not double quote characters. The CSV standard requires elements with embedded spaces to be quoted.
+
+
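To see why the quoted alternative matters, a field with an embedded comma (a made-up example) is kept intact rather than being split at the comma:

$ echo 'one,"two, three",four' | awk -f awk12_ex2.awk
<one> <"two, three"> <four>
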
3. Showing what happens to the separators:
+
$ cat awk12_ex3.awk
+{
+ flds = patsplit($0,a,/[A-Za-z]+/,s)
+ for (i in a)
+ printf "%s ",a[i]
+ print ""
+ for (i=1; i<=flds; i++)
+ printf "%s ",s[i]
+ print ""
+}
+$ echo "Grinning--------like----a-Cheshire--------cat---" | awk -f awk12_ex3.awk
+Grinning like a Cheshire cat
+-------- ---- - -------- ---
+
In this example the number of fields is stored in flds. The regexp used to define the fields is a sequence of one or more letters. These are printed in a loop as before.
+
The separators are printed in a loop which counts from 1 to the number of fields, and these elements are shown. There is also an element zero because patsplit saves the separator which precedes the first field, but this is empty and we don’t print it here.
+
Skip unless really interested
+
The data sent to this example was generated by an awk script which is shown below and is available in the downloadable file awk12_extra.awk. Note that this one has been made into a standalone script by the addition of the #! line at the start (and has been made executable):
+
$ cat awk12_extra.awk
+#!/usr/bin/awk -f
+#
+# Awk script to take a sequence of words separated by spaces and turn them
+# into a string where each word is followed by as many hyphens as there are
+# letters in the word itself.
+#
+{
+ for (i=1; i<=NF; i++){
+ fill=$i
+ gsub(/./,"-",fill)
+ printf "%s%s",$i,fill
+ }
+ print ""
+}
+$ echo "Grinning like a Cheshire cat" | ./awk12_extra.awk
+Grinning--------like----a-Cheshire--------cat---
+
+
Sorting arrays
+
Using PROCINFO
+
In standard awk, the order in which the elements of an array are returned is not defined and it’s necessary to go to some trouble to order them in a specific way.
+
Gnu Awk (gawk) lets you control the order in which the array elements are returned by use of a special built-in array called PROCINFO.
+
Setting PROCINFO["sorted_in"] to one of a set of predefined values allows array sorting. The values are:
+
Value               Effect
"@unsorted"         Array elements are unsorted, as in standard awk
"@ind_str_asc"      Order by indices in ascending order, compared as strings
"@ind_str_desc"     Order by indices in descending order, compared as strings
"@ind_num_asc"      Order by indices in ascending order, forcing them to be treated as numbers
"@ind_num_desc"     Order by indices in descending order, forcing them to be treated as numbers
"@val_type_asc"     Order by element values in ascending order; ordering is by the type assigned to the element
"@val_type_desc"    Order by element values in descending order; ordering is by the type assigned to the element
"@val_str_asc"      Order by element values in ascending order; scalar values are compared as strings
"@val_str_desc"     Order by element values in descending order; scalar values are compared as strings
"@val_num_asc"      Order by element values in ascending order; scalar values are compared as numbers
"@val_num_desc"     Order by element values in descending order; scalar values are compared as numbers

Caveats:
+
+
The sort order is determined before the loop begins and cannot be changed inside it.
+
The value of PROCINFO["sorted_in"] is effective throughout the script and affects all array-scanning loops; it is not localised.
+
+
This feature of GNU Awk is more complicated than has been described here. For example, arrays can be more complex than we have seen so far, and PROCINFO["sorted_in"] can also be used to call a user-defined function for sorting. The full details are available in the GNU Awk Manual, starting with section 8.1.6.
+
Examples
+
1. Sorting an array by its values:
+
$ cat awk12_ex4.awk
+BEGIN{
+ PROCINFO["sorted_in"]="@val_str_asc"
+}
+{
+ split($0,a," ")
+ for (i in a)
+ printf "%d: %s\n",i,a[i]
+}
+$ echo "An Englishman's home is his castle" | awk -f awk12_ex4.awk
+1: An
+2: Englishman's
+6: castle
+5: his
+3: home
+4: is
+
Here the array is populated using split. The setting of PROCINFO["sorted_in"] has requested sorting by element values in ascending order (in the BEGIN rule). The array is printed showing the indices and values and you can see that the order is as requested. Note that the words with capitals sort before the lowercase ones.
+
Addendum: I have included another example of the use of PROCINFO later in the notes. Since the audio has already been recorded I have named the example awk12_ex10.awk to avoid changing other file names.
+
Using Awk’s Array Sorting Functions
+
As mentioned in episode 11, there are two functions for sorting arrays in GNU Awk: asort and asorti.
+
+
asort(source [, dest [, how ] ])
+
Returns the number of elements in the array source.
+Sorts the values of source and replaces the indices of the sorted values of source with sequential integers starting with one.
+If the optional array dest is specified, then source is duplicated into dest. dest is then sorted, leaving the array source unchanged.
+The third argument how specifies how the array is to be sorted.
+
+
asorti(source [, dest [, how ] ])
+
Returns the number of elements in the array source.
+Sorts the indices of source instead of the values.
+If the optional array dest is specified, then source is duplicated into dest. dest is then sorted, leaving the array source unchanged.
+The third argument how specifies how the array is to be sorted.
+
+
+
In both cases the optional how argument defines the type of sorting. This must be one of the strings already defined: "@ind_str_asc" to "@val_num_desc". It can also be, as mentioned above, the name of a user-defined function. We have not looked at user-defined functions yet, so we will leave this option for the moment.
+
Examples
+
1. Sorting an array with numeric indexes with asort reorders the indices:
+
$ cat awk12_ex5.awk
+BEGIN{
+ a[1]="Jones"
+ a[2]="X"
+ a[3]="Smith"
+ asort(a)
+ for (i in a)
+ printf "%s %s\n",i,a[i]
+}
+$ awk -f awk12_ex5.awk
+1 Jones
+2 Smith
+3 X
+
Note that the indices have been destroyed and replaced with 1, 2 and 3, in this case in a different order from their original values.
+
2. Sorting an array with character indices using asort, showing that providing a destination array is a way to avoid affecting the original:
+
$ cat awk12_ex6.awk
+BEGIN{
+ a["a"]="Jones"
+ a["b"]="X"
+ a["c"]="Smith"
+ asort(a,b)
+ for (i in b)
+ printf "b[%s] = %s\n",i,b[i]
+ print ""
+ for (i in a)
+ printf "a[%s] = %s\n",i,a[i]
+}
+$ awk -f awk12_ex6.awk
+b[1] = Jones
+b[2] = Smith
+b[3] = X
+
+a[a] = Jones
+a[b] = X
+a[c] = Smith
+
This again shows the sorted array 'b' has had its indices replaced by the numbers 1, 2 and 3, so if these were important it might be a problem.
+
3. Sorting an array with string indices using asorti rebuilds the array with just the indexes, which is usually not useful on its own:
+
$ cat awk12_ex7.awk
+BEGIN{
+ a["third"]="Jones"
+ a["second"]="X"
+ a["first"]="Smith"
+ asorti(a)
+ for (i in a)
+ printf "%s %s\n",i,a[i]
+}
+$ awk -f awk12_ex7.awk
+1 first
+2 second
+3 third
+
In this case the contents of the array 'a' have been destroyed, making the indices the contents and adding numeric indices.
+
4. Sorting an array with string indices using asorti but using the dest argument results in an array that can be used to access the original array in sorted order without changing it:
+
$ cat awk12_ex8.awk
+BEGIN{
+ a["third"]="Jones"
+ a["second"]="X"
+ a["first"]="Smith"
+ asorti(a,b)
+
+ print "What array a contains:"
+ for (i in a)
+ printf "a[%s] = %s\n",i,a[i]
+ print ""
+
+ print "What array b contains:"
+ for (i in b)
+ printf "b[%s] = %s\n",i,b[i]
+ print ""
+
+ print "Accessing original array a with sorted indices in b"
+ for (i in b)
+ printf "%6s: %s\n",b[i],a[b[i]]
+}
+$ awk -f awk12_ex8.awk
+What array a contains:
+a[first] = Smith
+a[third] = Jones
+a[second] = X
+
+What array b contains:
+b[1] = first
+b[2] = second
+b[3] = third
+
+Accessing original array a with sorted indices in b
+ first: Smith
+second: X
+ third: Jones
+
Note: Since the audio explanation of this example was a bit vague I have enhanced the example to (hopefully) make it more understandable.
+
5. Sorting an array with character indices using asort but requesting a sort type "@val_str_desc" - descending order of element values:
+
$ cat awk12_ex9.awk
+BEGIN{
+ a["a"]="Jones"
+ a["b"]="X"
+ a["c"]="Smith"
+ asort(a,b,"@val_str_desc")
+ for (i in b)
+ printf "%s %s\n",i,b[i]
+}
+$ awk -f awk12_ex9.awk
+1 X
+2 Smith
+3 Jones
+
Extra example
+
1. Another PROCINFO example which counts the initial letters of words in a dictionary:
+
$ cat awk12_ex10.awk
+#!/usr/bin/awk -f
+
+#
+# Sort the indices as strings in ascending order
+#
+BEGIN{
+ PROCINFO["sorted_in"]="@ind_str_asc"
+}
+
+#
+# Make a frequency table of the first letter of each word
+#
+{
+ freq[substr($1,1,1)]++
+}
+
+#
+# Print the results in the frequency table
+#
+END{
+ for (i in freq)
+ printf "%s: %d\n",i,freq[i]
+}
+$ ./awk12_ex10.awk /usr/share/dict/words
+A: 1412
+B: 1462
+C: 1592
+D: 828
+E: 641
+F: 529
+G: 834
+H: 916
+I: 350
+J: 558
+K: 659
+...
+
In this example I have made the script executable and have added a hash bang line to define it as an Awk script. Don’t forget the '-f' at the end of that extra line.
+
In this example the dictionary file /usr/share/dict/words is scanned. Each line contains a single word and the script takes the first letter of this word and uses it as an index to the array freq. This element is incremented by 1 resulting in the accumulation of the frequencies of these initial letters. The frequency table is printed in the END rule but because a sort order has been defined in the BEGIN rule the elements appear in ascending order of the index.
+
Yet more about arrays
+
There is more to be said about arrays in Gnu Awk. It is possible to have multi-dimensional arrays (of a sort) and to have arrays as array elements too (a GNU extension).
+
We probably will not be covering these further topics in this series, though there is plenty of information in the GNU Awk manual if you want to dig deeper.
+
Of course, if we receive a request to cover this area in more depth then we will reconsider!
+
+
Real-world Awk example
+
One of the things I do for HPR is to process the show notes sent in with episodes, many of which are plain text. Since we need HTML for loading into the HPR database I run these through an editor and a series of scripts to turn them into Markdown, and then generate HTML from them. I do this on my workstation after grabbing a copy of the notes from the HPR server.
+
In order to check that the generated HTML looks OK I make a local copy of it, which can be viewed with a browser, and I use a tool called pandoc to make this version. This tool turns Markdown into HTML (amongst other document conversion tasks), but lately some of its requirements have changed necessitating a change to my workflow.
+
To make the HTML copy I want for local viewing pandoc needs some additional information. The information takes the form of two delimited lines in YAML format, such as:
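
A minimal sketch of those lines (the title and author values here are illustrative):

---
title: 'Gnu Awk - Part 12 (HPR Show 2610)'
author: 'Dave Morriss'
...

The Bash fragment that generates them is sketched below, reconstructed from the description that follows; the author: label and the [[:space:]] pattern are assumptions based on that description, the RAWFILE and TMP1 variables come from it, and the line numbers referred to in the text match this layout:

awk -f - "$RAWFILE" > "$TMP1" <<'ENDAWK'
BEGIN { print "---" }
/^Title:/ {
    sub(/^Title:[[:space:]]+/, "")
    gsub(/'/, "''")
    printf "title: '%s'\n", $0
}
/^Host_Name:/ {
    sub(/^Host_Name:[[:space:]]+/, "")
    gsub(/'/, "''")
    printf "author: '%s'\n", $0
}
END { print "..." }
ENDAWK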
The first line is the invocation of awk. Note that the argument to the -f option is '-', which means the standard input channel. This is catered for by the Bash heredoc which is everything from "<<'ENDAWK'" to the last line in the example. This is Bash’s way of embedding data in a script without having to put it in a string and risk all the issues that can ensue with string delimiters.
+
The character string (ENDAWK) used in the heredoc to enclose the information to be offered to awk on standard input is chosen by the user, but it must be unique within the Bash script. Enclosing the first instance in single quotes turns off the Bash parameter substitution within the enclosed document - so '$0' in this example would have been seen and interpreted by Bash as a shell variable if this had not been done.
+
The data file being processed by awk is a file containing the output of the show submission form, the name of which is in the RAWFILE variable. The output from awk is written to a temporary file, the name of which is in the variable TMP1.
+
The awk script itself writes the necessary three hyphens in the BEGIN rule (line 2) and the final three fullstops in the END rule (line 13).
+
There are two regular expression matching rules. One matches ^Title: which precedes the show title in the input file. The other matches ^Host_Name: which labels the line containing the name of the host.
+
In both cases these labels, with the trailing white space (often a Tab) are deleted using the sub function (lines 4 and 9).
+
Because the resulting strings might contain single quotes, a gsub call is used to double any such quotes (lines 5 and 10).
+
Finally the two strings are written out with the required labels for pandoc, using single quotes to enclose each of them (lines 6 and 11).
+
The resulting file of YAML-format metadata is read by pandoc before the file of notes for the show.
+
+
diff --git a/eps/hpr2639/hpr2639_bash9_ex1.sh b/eps/hpr2639/hpr2639_bash9_ex1.sh
new file mode 100755
index 0000000..cb3b4ab
--- /dev/null
+++ b/eps/hpr2639/hpr2639_bash9_ex1.sh
@@ -0,0 +1,16 @@
+#!/bin/bash
+
+#
+# Demonstration of the arithmetic expressions as tests
+#
+
+for i in {-3..3}; do
+ echo -n "$i: "
+ if ((i)); then
+ echo "$? true"
+ else
+ echo "$? false"
+ fi
+done
+
+exit
diff --git a/eps/hpr2639/hpr2639_full_shownotes.html b/eps/hpr2639/hpr2639_full_shownotes.html
new file mode 100755
index 0000000..9329b10
--- /dev/null
+++ b/eps/hpr2639/hpr2639_full_shownotes.html
@@ -0,0 +1,196 @@
+
+
+
+
+
+
+
+ Some ancillary Bash tips - 9 (HPR Show 2639)
+
+
+
+
+
+
+
+
+
Some ancillary Bash tips - 9 (HPR Show 2639)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Making decisions in Bash
+
This is my ninth contribution to the Bash Scripting series under the heading of Bash Tips. The previous episodes are listed below in the Links section.
+
It seems to me that it would be worthwhile looking at how Bash can be used to make decisions, such as how many times a loop should cycle (looping constructs) or to choose between multiple choices (conditional constructs). Of course we need to look at some of the expressions used in conjunction with the commands that do these tasks – the tests themselves – and we’ll do this in this episode.
+
This is a complex area which I had some trouble with when I first started using Bash, and there is a lot to say about it all. I have prepared a group of HPR shows about this subject, in order to do it justice, and this is the first of the group.
+
Types of test
+
There are four main types of test that can be used to make decisions in Bash:
+
+
expressions with the test command and the '[' and ']' operators
+
shell arithmetic
+
expressions with the '[[' and ']]' operators
+
Bash functions, commands and programs (in certain contexts)
+
+
Note that in some examples in this episode we are using commands which haven’t been properly explained yet, but we will be covering them later in this group of shows.
+
The '[' and ']' operators and the test command
+
Using '[ expression ]' or 'test expression' is the same (except that in the former case both square brackets are required). These are built-in commands inherited by Bash from its predecessor the Bourne Shell. See the GNU Manual section on builtins for all of the details. The single square brackets and the test command are standard across other shells that conform to the POSIX standard, so it is important to know about them.
+
Note that since '[' and ']' are considered to be equivalent to commands they must be separated from their arguments by at least one space. It is a common error for Bash beginners to write something like:
+
if [$i -gt 0]; then
+ echo "Positive"
+fi
+
This results in an error such as:
+
bash: [2: command not found
+
This is because the left square bracket and the value of variable 'i' have been concatenated since there is no intervening space.
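
The fix is simply to add the spaces (and, as we will see shortly, to quote the variable):

if [ "$i" -gt 0 ]; then
    echo "Positive"
fi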
+
The expression between square brackets (or following test) is a Conditional Expression, which is well documented in the relevant section of the GNU Bash Manual. We will be looking at these later in this group of shows.
+
Shell arithmetic
+
We looked at some aspects of this subject in show 1951. This show covered arithmetic expansion and mentioned the arithmetic expressions available in Bash. The type of arithmetic expansion expression we looked at was:
+
$ echo $((42/5))
+8
+
There is also a form consisting of:
+
(( expression ))
+
Note that Bash does not care whether you use spaces before and after the expression.
+
It is possible to assign values and to compare them in this expression. When being used as a test expression the numeric result is used to obtain a true/false value:
+
+
if the value of the expression is non-zero, the return status is 0 (true)
+
otherwise the return status is 1 (false)
+
+
The following downloadable example (bash9_ex1.sh) demonstrates how this type of test behaves:
+
$ cat bash9_ex1.sh
+#!/bin/bash
+
+#
+# Demonstration of the arithmetic expressions as tests
+#
+
+for i in {-3..3}; do
+ echo -n "$i: "
+ if ((i)); then
+ echo "$? true"
+ else
+ echo "$? false"
+ fi
+done
+
+exit
+$ ./bash9_ex1.sh
+-3: 0 true
+-2: 0 true
+-1: 0 true
+0: 1 false
+1: 0 true
+2: 0 true
+3: 0 true
+
All values except for zero return a result of 0 (true).
+
The expression in the double parentheses can be a numeric assignment:
+
((x = 42))
+
This would be regarded as true in a test since 42 is non-zero.
+
The expression may also be a comparison:
+
((x == 42))
+
If x contains 42 this expression will have a non-zero value and will return the status zero (true), so in a test it can be used to check the value of an integer variable.
+
There are also Boolean (logical) operators such as && (and) and || (or), so it is possible to write an expression such as:
+
((x == 42 && y != 42))
+
This will be true when x is 42 and y is not.
+
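For example, a minimal sketch (the values are chosen arbitrarily):
+
x=42; y=7
+if ((x == 42 && y != 42)); then
+    echo "x is 42 and y is not"
+fi
+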
The arithmetic expressions section of the GNU Bash Manual lists all of the arithmetic operators available in Bash.
+
The '[[' and ']]' operators
+
We have seen the use of single square brackets (and the test command), but Bash offers an alternative using double square brackets as in:
+
[[ expression ]]
+
This form, often referred to as the extended test, also returns a status of zero or 1 after evaluating the conditional expression expression. The possible conditional expressions are documented in the GNU Bash Manual, and we will look at these later.
+
Just as with the single left square bracket the double bracket is a command (actually it’s a keyword but the way it’s used is the same), so it must be followed by a space. Spaces are also required before the closing bracket or brackets.
+
One of the differences between [ expression ] and [[ expression ]] is that with the former you should enclose the left operand in double quotes, as in:
+
[ "$n" -eq 42 ]
+
The reason is that if variable n is null or empty and there are no quotes the expression will become:
+
[ -eq 42 ]
+
This is illegal.
+
However, with the double bracket form omitting the quotes is accepted:
+
[[ $n -eq 42 ]]
+
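A quick way to see the difference, as a sketch with n deliberately left unset:
+
unset n
+[ $n -eq 42 ]     # fails with an error like: bash: [: -eq: unary operator expected
+[[ $n -eq 42 ]]   # no error; the test simply returns false (status 1)
+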
The way in which the contents of these double brackets are processed by Bash is different; to quote the GNU Bash manual:
+
+
Word splitting and filename expansion are not performed on the words between the '[[' and ']]'; tilde expansion, parameter and variable expansion, arithmetic expansion, command substitution, process substitution, and quote removal are performed.
+
+
We covered nearly all of these expansion topics in earlier episodes of Bash Tips.
+
Bash functions, commands and programs
+
So far I have not done any episodes describing how to write Bash functions, though there have been several more advanced episodes talking about some useful functions I have written for use in scripts. Functions are able to return status values, and such functions can be used as test commands to control some of the commands mentioned. See the yes_no function discussed in episode 2096 for an example that can be used in tests.
+
Commands like grep can be used in a test:
+
if grep -q -e '^banana$' fruit.txt; then
+ echo "Found a banana"
+fi
+
Here the -q option prevents grep from reporting anything; it just returns a status. The string ‘banana’ is being searched for in the file fruit.txt. If it is found grep will exit with a zero status – in other words, a true value – and this will then cause the if command to execute the echo command.
+
It is also possible to run a script or compiled program (possibly one that you have written) in the same way. This relies on the script or program returning a status that signifies success or failure. Most installed tools do this, as we saw with grep.
+
Bash itself provides two commands true and false which may be programs (perhaps /bin/true and /bin/false) or, more likely, built-in commands, depending on the operating system version you are using. In my case I find that both exist:
+
$ which true false
+/bin/true
+/bin/false
+$ type true false
+true is a shell builtin
+false is a shell builtin
+
The which command finds programs and the type command reports what type of thing a command is.
+
The true and false commands return true and false status values, in the way explained below. The builtin version will normally be the preferred one.
+
In the next episode on Looping Constructs and Conditional Constructs the components marked test_commands used in the syntax descriptions are expressions that return a status of 0 (true) or non-zero (false) in the way just explained.
+
Understanding the status of a command
+
It is important to understand the concept of the return status of a command. The command:
+
$ echo "99"
+
displays the number 99 but returns a status of zero, which denotes the value true. This status is held in the special Bash variable $? which can be tested or displayed. Note that every command resets the status, so the original value is easily lost:
+
$ echo "99"
+99
+$ echo $?
+0
+
If the command fails then a false (non-zero) value will be returned, as in this illustrative example (the exact status value depends on the command that failed):
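+
$ cat /nonexistent/file
+cat: /nonexistent/file: No such file or directory
+$ echo $?
+1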
+
+
diff --git a/eps/hpr2649/hpr2649_bash10_ex1.sh b/eps/hpr2649/hpr2649_bash10_ex1.sh
new file mode 100755
index 0000000..f01de67
--- /dev/null
+++ b/eps/hpr2649/hpr2649_bash10_ex1.sh
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+#
+# Demonstration of the 'if' command
+#
+for fruit in banana apple pear kiwi; do
+ if [ "$fruit" == "banana" ]; then
+ echo "$fruit: don't eat the skin"
+ elif [ "$fruit" == "apple" ]; then
+ echo "$fruit: eat the skin or not, as you please"
+ elif [ "$fruit" == "kiwi" ]; then
+ echo "$fruit: most people remove the skin"
+ else
+ echo "$fruit: not sure how to handle this"
+ fi
+done
+
+exit
diff --git a/eps/hpr2649/hpr2649_bash10_ex2.sh b/eps/hpr2649/hpr2649_bash10_ex2.sh
new file mode 100755
index 0000000..f7f631f
--- /dev/null
+++ b/eps/hpr2649/hpr2649_bash10_ex2.sh
@@ -0,0 +1,16 @@
+#!/bin/bash
+
+#
+# Demonstration of the 'case' command
+#
+for fruit in banana apple pear kiwi nectarine; do
+ case $fruit in
+ banana) echo "$fruit: don't eat the skin" ;;
+ apple) echo "$fruit: eat the skin or not, as you please" ;;
+ kiwi) echo "$fruit: most people remove the skin" ;;
+ nectarine) echo "$fruit: leave the skin on and eat it" ;;
+ *) echo "$fruit: not sure how to advise" ;;
+ esac
+done
+
+exit
diff --git a/eps/hpr2649/hpr2649_bash10_ex3.sh b/eps/hpr2649/hpr2649_bash10_ex3.sh
new file mode 100755
index 0000000..726e0f5
--- /dev/null
+++ b/eps/hpr2649/hpr2649_bash10_ex3.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+
+#
+# Further demonstration of the 'case' command with alternative clause
+# terminators
+#
+
+i=704526
+
+echo "Number given is: $i"
+
+case $i in
+ *0*) echo "it contains a 0" ;;&
+ *1*) echo "it contains a 1" ;;&
+ *2*) echo "it contains a 2" ;;&
+ *3*) echo "it contains a 3" ;;&
+ *4*) echo "it contains a 4" ;;&
+ *5*) echo "it contains a 5" ;;&
+ *6*) echo "it contains a 6" ;;&
+ *7*) echo "it contains a 7" ;;&
+ *8*) echo "it contains a 8" ;;&
+ *9*) echo "it contains a 9" ;;
+esac
+
+exit
diff --git a/eps/hpr2649/hpr2649_full_shownotes.html b/eps/hpr2649/hpr2649_full_shownotes.html
new file mode 100755
index 0000000..4f309cb
--- /dev/null
+++ b/eps/hpr2649/hpr2649_full_shownotes.html
@@ -0,0 +1,290 @@
+
+
+
+
+
+
+
+ More ancillary Bash tips - 10 (HPR Show 2649)
+
+
+
+
+
+
+
+
+
More ancillary Bash tips - 10 (HPR Show 2649)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Making decisions in Bash
+
This is my tenth contribution to the Bash Scripting series under the heading of Bash Tips. The previous episodes are listed below in the Links section.
+
We are currently looking at decision making in Bash, and in the last episode we examined the tests themselves. In this episode we’ll look at the constructs that use these tests: looping constructs, conditional constructs and lists of commands.
+
Note: this episode and the preceding one were originally recorded as a single episode, but because it was so long it was split into two. As a consequence the audio contains references to examples such as bash9_ex2.sh where the true name is bash10_ex1.sh. The notes have been updated as necessary but not the audio.
+
Looping Constructs
+
Bash supports a number of commands which can be used to build loops. These are documented in the Looping Constructs section of the GNU Bash Manual. We will look only at while and until here because they contain tests. We will leave for loops until a later episode.
+
while command
+
The syntax of the while command is:
+
+
while test_commands
+do
+ commands
+done
+
+
The commands are executed as long as test_commands return an exit status which is zero (loop while the result is true).
+
until command
+
The syntax of the until command is:
+
+
until test_commands
+do
+ commands
+done
+
+
The commands are executed as long as test_commands return an exit status which is non-zero (loop until the result is true).
+
Examples of while and until
+
Example 1
+
The following code snippet (shown here in one possible form) will print variable i and increment it while its value is less than 5, so it will output the numbers 0..4:
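+
i=0
+while [ "$i" -lt 5 ]; do
+    echo "$i"
+    ((i++))
+done
+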
Note that in this example the while and do parts are both on the same line, separated by a semicolon. Also, as mentioned in the last show, the quotes around "$i" are advisable in case the variable is null, but if the variable is not initialised the loop will fail whether the quotes are used or not. Even the shellcheck tool I use to check my Bash scripts does not complain about missing quotes here.
+
Example 2
+
The next snippet (again in one possible form) will start with variable i set to 5 and decrement it down to zero:
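+
i=5
+until ((i == 0)); do
+    echo "$i"
+    ((i--))
+done
+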
In this case the last value printed will be 1, after which i will be decremented to 0, which will stop the loop.
+
Conditional Constructs
+
Bash offers three commands under this heading, two of which have a conditional component. The commands are if, case and select. They are documented in the Conditional Constructs section of the GNU Bash Manual. We will look only at if and case in this episode and will leave select until a later episode.
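+
The general form of the if command is:
+
if test_commands_1
+then
+    commands_1
+elif test_commands_2
+then
+    commands_2
+else
+    commands_3
+fi
+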
If test_commands_1 returns a status of zero then commands_1 will be executed and the if command will terminate. If the status is non-zero then any elif part will be tested, and the associated commands (commands_2 in this example) executed if the result is true. There may be zero or more of these elif parts.
+
Once the if and any elif parts are tested and they all return false, the commands in the else part (commands_3 here) will be executed. There may be zero or one else parts.
+
Note that the then part can be written on the same line as the if/elif, when separated by a semicolon.
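+
The general form of the case command is:
+
case word in
+    pattern_list) command_list ;;
+    pattern_list) command_list ;;
+esac
+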
The case command will selectively execute the command_list corresponding to the first pattern_list that matches word.
+
If a pattern_list contains multiple patterns then they are separated by the | character. The patterns are the Glob patterns we have already seen (show 2278). The pattern_list is terminated by the right parenthesis (and can be preceded by a left parenthesis if desired). The list of patterns and an associated command_list is known as a clause.
+
There is no limit to the number of case clauses. The first pattern that matches determines the command_list that is executed. There is no default pattern, but making '*' the final one – a pattern that will always match – achieves the same thing.
+
The clause terminator must be one of ';;', ';&', or ';;&', as explained below:
+
+
+
+
+
+
+
+
Terminator
+
Meaning
+
+
+
+
+
;;
+
no subsequent matches are attempted after the first pattern match
+
+
+
;&
+
execution continues with the command_list associated with the next clause, if any
+
+
+
;;&
+
causes the shell to test the patterns in the next clause, if any, and execute any associated command_list on a successful match
+
+
+
+
Examples of if and case
+
Example 3
+
This example shows the full range of the structured command ‘if’ with elif and else branches:
+
fruit="apple"
+if [ "$fruit" == "banana" ]; then
+ echo "$fruit: don't eat the skin"
+elif [ "$fruit" == "apple" ]; then
+ echo "$fruit: eat the skin or not, as you please"
+elif [ "$fruit" == "kiwi" ]; then
+ echo "$fruit: most people remove the skin"
+else
+ echo "$fruit: not sure how to advise"
+fi
+
See the downloadable example script bash10_ex1.sh1 which uses the above if structure in a for loop. Run it yourself to see what it does.
+
Example 4
+
Here is the same idea using a case command:
+
fruit="apple"
+case $fruit in
+ banana) echo "$fruit: don't eat the skin" ;;
+ apple) echo "$fruit: eat the skin or not, as you please" ;;
+ kiwi) echo "$fruit: most people remove the skin";;
+ *) echo "$fruit: not sure how to advise"
+esac
+
See the downloadable example script bash10_ex2.sh2 which uses a case command similar to the above in a for loop.
+
Example 5
+
This example has been added since the audio was recorded to give an example of the use of the ;;& clause terminator in a case command.
+
The following downloadable example (bash10_ex3.sh) demonstrates this:
+
$ cat bash10_ex3.sh
+#!/bin/bash
+
+#
+# Further demonstration of the 'case' command with alternative clause
+# terminators
+#
+
+i=704526
+
+echo "Number given is: $i"
+
+case $i in
+ *0*) echo "it contains a 0" ;;&
+ *1*) echo "it contains a 1" ;;&
+ *2*) echo "it contains a 2" ;;&
+ *3*) echo "it contains a 3" ;;&
+ *4*) echo "it contains a 4" ;;&
+ *5*) echo "it contains a 5" ;;&
+ *6*) echo "it contains a 6" ;;&
+ *7*) echo "it contains a 7" ;;&
+ *8*) echo "it contains a 8" ;;&
+ *9*) echo "it contains a 9" ;;
+esac
+
+exit
+$ ./bash10_ex3.sh
+Number given is: 704526
+it contains a 0
+it contains a 2
+it contains a 4
+it contains a 5
+it contains a 6
+it contains a 7
+
The script sets variable 'i' to a 6-digit number. The number is displayed with an echo command. The case command tests the variable with glob patterns containing all of the digits 0-9. Each case clause (except the last) is terminated with the ;;& sequence, which means that the patterns of the following clause are tested regardless of whether the current one matched.
+
The end result is that every pattern is tested and those that match generate output. If the case clauses had used the usual ;; terminators then the case command would exit after the first match.
+
Lists of Commands
+
Bash commands can be typed in lists. The simplest list is just a series of commands (or pipelines - a subject we will look at more in later shows in the Bash Tips series), each separated by a newline.
+
However, there are other list separators such as ';', '&', '&&', and '||'. The first two, ';' and '&' are not really relevant to decision making, so we will omit these for now. However so-called AND and OR lists are relevant. These consist of commands or pipelines separated by '&&' (logical AND), and '||' (logical OR).
+
AND Lists
+
An AND list has the form:
+
command1 && command2
+
command2 is executed if, and only if, command1 returns an exit status of zero.
+
OR Lists
+
An OR list has the form
+
command1 || command2
+
command2 is executed if, and only if, command1 returns a non-zero exit status.
+
An insight into how these lists behave
+
These operators short circuit:
+
+
in the case of '&&' an attempt is being made to determine the result of applying a logical AND operation between the two operands. They both need to be true before the overall result is true. If the first operand (command1) is false then there is no need to compute the second result, the overall result must be false, so there is a short circuit.
+
in the case of '||' either or both of the operands of the logical OR operation can be true to give an overall result of true. Thus if command1 returns true nothing else need be done to determine the overall result, whereas if command1 is false, then command2 must be executed to determine the overall result.
+
+
I found it useful to consider this when using these types of lists, so I am sharing it with you.
+
Examples
+
It is common to see these used in scripts as a simplified form of decision with an explicit test as command1. For example, you might see:
+
[ -e /some/file ] || exit 1
+
Here the script will exit if the named file does not exist (we will look at the -e operator in the next episode). Note that it exits with a non-zero result so that the script itself could be used as command1 in an AND or OR list.
+
It is possible to execute several commands instead of just the exit by grouping them in curly braces ('{}'). For example:
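+
[ -e /home/user1/somefile ] || { echo "Unable to find /home/user1/somefile"; exit 1; }
+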
It is necessary to type a space3 after '{' and before '}'. Also each command within the braces must end with a semicolon (or a newline).
+
This example could be written as follows, remembering that test is an alternative to '[...]':
+
test -e /home/user1/somefile || {
+ echo "Unable to find /home/user1/somefile"
+ exit 1
+}
+
As we have already seen it is possible to use any test or command which returns an exit status of zero or non-zero as command1 in a list. So the following command list is equivalent to the 'if' example above:
+
grep -q -e '^banana$' fruit.txt && echo "Found a banana"
+
However, it is my opinion that it is clearer and more understandable when the 'if' alternative is used.
The audio refers to the examples by the name they had before the one long show was split into two. What was bash9_ex2.sh has become bash10_ex1.sh.↩
+
The audio refers to the examples by the name they had before the one long show was split into two. What was bash9_ex3.sh has become bash10_ex2.sh.↩
+
Technically this should be whitespace which means one or more spaces, tabs or newlines.↩
+
+
+
+
+
+
+
diff --git a/eps/hpr2659/hpr2659_bash11_ex1.sh b/eps/hpr2659/hpr2659_bash11_ex1.sh
new file mode 100755
index 0000000..062e408
--- /dev/null
+++ b/eps/hpr2659/hpr2659_bash11_ex1.sh
@@ -0,0 +1,15 @@
+#!/bin/bash
+
+#
+# A directory we want to create if it doesn't exist
+#
+BASEDIR="/tmp/testdir"
+
+#
+# Check for the existence of the directory and create it if not found
+#
+if [[ ! -d "$BASEDIR" ]]; then
+ # Create directory and take action on failure
+ mkdir "$BASEDIR" || { echo "Failed to create $BASEDIR"; exit 1; }
+ echo "Created $BASEDIR"
+fi
diff --git a/eps/hpr2659/hpr2659_bash11_ex2.sh b/eps/hpr2659/hpr2659_bash11_ex2.sh
new file mode 100755
index 0000000..6a119b9
--- /dev/null
+++ b/eps/hpr2659/hpr2659_bash11_ex2.sh
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+#
+# Read a reply from the user, then check it's not zero length
+#
+read -r -p "Please enter a string: " reply
+if [[ ${#reply} -eq 0 ]]; then
+ echo "Please provide a non-empty reply"
+else
+ echo "You said: $reply"
+fi
+
+#
+# Read a reply from the user, then check it's not zero length
+#
+read -r -p "Please enter a string: " reply
+if [[ -z $reply ]]; then
+ echo "Please provide a non-empty reply"
+else
+ echo "You said: $reply"
+fi
diff --git a/eps/hpr2659/hpr2659_bash11_ex3.sh b/eps/hpr2659/hpr2659_bash11_ex3.sh
new file mode 100755
index 0000000..2d7794c
--- /dev/null
+++ b/eps/hpr2659/hpr2659_bash11_ex3.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+
+#
+# String comparison with a pattern, using an 'extglob' type pattern
+#
+str="Further ancillary Bash tips - 11"
+if [[ $str == +([[:alnum:] -]) ]]; then
+ echo "Matched"
+fi
diff --git a/eps/hpr2659/hpr2659_bash11_ex4.sh b/eps/hpr2659/hpr2659_bash11_ex4.sh
new file mode 100755
index 0000000..84fe0be
--- /dev/null
+++ b/eps/hpr2659/hpr2659_bash11_ex4.sh
@@ -0,0 +1,12 @@
+#!/bin/bash
+
+#
+# String comparison with a pattern, using one of a list of patterns
+#
+for str in 'dog' 'pig' 'rat' 'cat' ''; do
+ if [[ $str == @(pig|dog|cat) ]]; then
+ echo "Matched '$str'"
+ else
+ echo "Didn't match '$str'"
+ fi
+done
diff --git a/eps/hpr2659/hpr2659_bash11_ex5.sh b/eps/hpr2659/hpr2659_bash11_ex5.sh
new file mode 100755
index 0000000..120624e
--- /dev/null
+++ b/eps/hpr2659/hpr2659_bash11_ex5.sh
@@ -0,0 +1,15 @@
+#!/bin/bash
+
+#
+# String comparison with a pattern in a variable. The pattern matches any
+# word that ends with 'man' that is longer than 3 letters.
+#
+pattern="+([[:word:]])man"
+echo "Pattern is: $pattern"
+for str in 'man' 'woman' 'German' 'Xman' 'romance' ''; do
+ if [[ $str == $pattern ]]; then
+ echo "Matched '$str'"
+ else
+ echo "Didn't match '$str'"
+ fi
+done
diff --git a/eps/hpr2659/hpr2659_full_shownotes.html b/eps/hpr2659/hpr2659_full_shownotes.html
new file mode 100755
index 0000000..910f65a
--- /dev/null
+++ b/eps/hpr2659/hpr2659_full_shownotes.html
@@ -0,0 +1,385 @@
+
+
+
+
+
+
+
+ Further ancillary Bash tips - 11 (HPR Show 2659)
+
+
+
+
+
+
+
+
+
Further ancillary Bash tips - 11 (HPR Show 2659)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Making decisions in Bash
+
This is the eleventh episode in the Bash Tips sub-series. It is the third of a group of shows about making decisions in Bash.
+
In the last two episodes we saw the types of test Bash provides, and we looked briefly at some of the commands that use these tests. Now we want to start examining the expressions that can be used in these tests, and how to combine them. We will also start looking at string comparisons in extended tests.
+
Bash Conditional Expressions
+
This section is based very closely on the section of the GNU Bash Manual of the same name (and the Bash manpage). The list below is essentially the same except that the explanations are a little longer where it seemed necessary to add more detail.
+
Conditional expressions are used by the '[[' and ']]' extended test operators and the test and '[' and ']' builtin commands (see part 1, episode 2639).
+
Expressions may be unary or binary. Unary operators take a single argument to the right, whereas binary operators take two arguments to the left and right. Unary expressions are often used to examine the status of a file. There are string operators and numeric comparison operators as well.
+
When used with '[[', the '<' and '>' operators sort lexicographically using the current locale. The test command uses ASCII ordering.
+
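For example, the following comparison is true; note that with test or '[' the operator would need escaping, as in [ apple \< banana ]:
+
if [[ apple < banana ]]; then
+    echo "apple sorts before banana"
+fi
+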
Unless otherwise specified, primaries that operate on files follow symbolic links and operate on the target of the link, rather than the link itself.
+
+
+
-a file
+
True if file exists. This is identical in effect to -e. It has been “deprecated,” and its use is discouraged.
+
+
-b file
+
True if file exists and is a block special file. (A block device reads and/or writes data in chunks, or blocks, in contrast to a character device, which accesses data in character units. Examples of block devices are hard drives, CDROM drives, and flash drives. Examples of character devices are keyboards, modems, sound cards.)
+
+
-c file
+
True if file exists and is a character special file. (See the -b description for an explanation)
+
+
-d file
+
True if file exists and is a directory.
+
+
-e file
+
True if file exists.
+
+
-f file
+
True if file exists and is a regular file. (Not a directory or any of the other special files)
+
+
-g file
+
True if file exists and its set-group-id bit is set. (If a directory has the sgid flag set, then a file created within that directory belongs to the group that owns the directory, not necessarily to the group of the user who created the file. This may be useful for a directory shared by a workgroup)
+
+
-h file
+
True if file exists and is a symbolic link.
+
+
-k file
+
True if file exists and its “sticky” bit is set. (Commonly known as the sticky bit, the save-text-mode flag is a special type of file permission. If a file has this flag set, that file will be kept in cache memory, for quicker access. However, note that on Linux systems, the sticky bit is no longer used for files, only on directories)
+
+
-p file
+
True if file exists and is a named pipe (FIFO).
+
+
-r file
+
True if file exists and is readable.
+
+
-s file
+
True if file exists and has a size greater than zero.
+
+
-t fd
+
True if file descriptor fd is open and refers to a terminal. (This test option may be used to check whether stdin ([ -t 0 ]) or stdout ([ -t 1 ]) in a given script is a terminal)
+
+
-u file
+
True if file exists and its set-user-id bit is set. (A binary owned by root with set-user-id flag set runs with root privileges, even when an ordinary user invokes it. A file with the suid flag set shows an s in its permissions)
+
+
-w file
+
True if file exists and is writable.
+
+
-x file
+
True if file exists and is executable.
+
+
-G file
+
True if file exists and is owned by the effective group id.
+
+
-L file
+
True if file exists and is a symbolic link.
+
+
-N file
+
True if file exists and has been modified since it was last read.
+
+
-O file
+
True if file exists and is owned by the effective user id.
+
+
-S file
+
True if file exists and is a socket.
+
+
file1 -ef file2
+
True if file1 and file2 refer to the same device and inode numbers. (Files file1 and file2 are hard links to the same file)
+
+
file1 -nt file2
+
True if file1 is newer (according to modification date) than file2, or if file1 exists and file2 does not.
+
+
file1 -ot file2
+
True if file1 is older than file2, or if file2 exists and file1 does not.
+
+
-o optname
+
True if the shell option optname is enabled. The list of options appears in the description of the -o option to the set builtin (see The Set Builtin).
+
+
-v varname
+
True if the shell variable varname is set (has been assigned a value).
+
+
-R varname
+
True if the shell variable varname is set and is a name reference.
+
+
-z string
+
True if the length of string is zero.
+
+
-n string or string
+
True if the length of string is non-zero.
+
+
string1 == string2 or string1 = string2
+
True if the strings are equal. When used with the [[ command, this performs pattern matching as described below.
+The '=' operator should be used with the test command for POSIX conformance.
+
+
string1 != string2
+
True if the strings are not equal.
+
+
string1 < string2
+
True if string1 sorts before string2 lexicographically.
+
+
string1 > string2
+
True if string1 sorts after string2 lexicographically.
+
+
arg1 OP arg2
+
OP is one of -eq, -ne, -lt, -le, -gt, or -ge. These arithmetic binary operators return true if arg1 is equal to, not equal to, less than, less than or equal to, greater than, or greater than or equal to arg2, respectively. Arg1 and arg2 may be positive or negative integers.
+
+
+
Combining expressions
+
Operators used with test and '[...]'
+
+
! expr
+
True if expr is false.
+
+
( expr )
+
Returns the value of expr. This may be used to override the normal precedence of operators.
+
+
expr1 -a expr2
+
True if both expr1 and expr2 are true.
+
+
expr1 -o expr2
+
True if either expr1 or expr2 is true.
+
+
+
Operators used with '[[...]]'
+
+
expr1 && expr2
+
True if both expr1 and expr2 are true. Differs from -a in that if expr1 returns False then expr2 is never invoked. This operator short circuits.
+
+
expr1 || expr2
+
True if either expr1 or expr2 is true. Differs from -o in that if expr1 returns True then expr2 is never invoked. This operator short circuits.
+
+
+
Conditional expression examples
+
Example 1
+
if [[ ! -e "$file" ]]; then
+ echo "File $file not found; aborting"
+ exit 1
+fi
+
This is a typical use of the -e operator to test for the existence of a file that has been passed through as an argument, or is an expected constant name. It is good to make the script exit at this point and to do so with a failure result of 1. This way the caller, which may be a script, can take error action as well.
+
This can also be written as a command list, as mentioned in the previous episode (one possible form is shown below):
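+
[[ -e "$file" ]] || { echo "File $file not found; aborting"; exit 1; }
+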
Note that this time we test for existence of the file and if the -e operator returns False the next command will be executed. This is a compound command in curly braces, and as discussed in the previous episode the two commands in it must end with semicolons inside the braces. See the appropriate section of the GNU Bash Manual for further details.
+
Example 2
+
$ cat bash11_ex1.sh
+#!/bin/bash
+
+#
+# A directory we want to create if it doesn't exist
+#
+BASEDIR="/tmp/testdir"
+
+#
+# Check for the existence of the directory and create it if not found
+#
+if [[ ! -d "$BASEDIR" ]]; then
+ # Create directory and take action on failure
+ mkdir "$BASEDIR" || { echo "Failed to create $BASEDIR"; exit 1; }
+ echo "Created $BASEDIR"
+fi
+$ ./bash11_ex1.sh
+Created /tmp/testdir
+
This might be a way to determine if a particular directory exists, and if not, create it. Note how the mkdir command is part of a command list using a logical OR. If this command fails the following command is executed. As in Example 1 this is a compound command in curly braces, containing two individual commands, echo and exit and between them they will produce an error message and exit the script.
+
This example is available as a downloadable file (bash11_ex1.sh).
+
Example 3
+
Finding the length of a string is often something a script needs to do. The string may be the output from a command, or input from the script user for example. One way to check if the string is empty is:
+
if [[ ${#reply} -eq 0 ]]; then
+ echo "Please provide a non-empty reply"
+fi
+
A better way is to use the -z operator:
+
if [[ -z $reply ]]; then
+ echo "Please provide a non-empty reply"
+fi
+
The following script demonstrates these two alternatives:
+
$ cat bash11_ex2.sh
+#!/bin/bash
+
+#
+# Read a reply from the user, then check it's not zero length
+#
+read -r -p "Please enter a string: " reply
+if [[ ${#reply} -eq 0 ]]; then
+ echo "Please provide a non-empty reply"
+else
+ echo "You said: $reply"
+fi
+
+#
+# Read a reply from the user, then check it's not zero length
+#
+read -r -p "Please enter a string: " reply
+if [[ -z $reply ]]; then
+ echo "Please provide a non-empty reply"
+else
+ echo "You said: $reply"
+fi
+$ ./bash11_ex2.sh
+Please enter a string: OK
+You said: OK
+Please enter a string:
+Please provide a non-empty reply
+
This example is available as a downloadable file (bash11_ex2.sh).
+
String comparisons
+
Because the string comparisons mentioned above are more complex (and more powerful) than other expressions, we will look at them in more detail. There is also a binary operator which performs a regular expression match, a kind of string matching not listed in the above section. We will look at this subject in the next episode.
+
Pattern matching
+
When comparing strings with test and '[...]' (using == or the POSIX-compliant =, and !=) the two strings being compared are treated as plain strings. However, when using the extended test operators '[[...]]' it is possible to compare the left-hand argument with a pattern as the right-hand argument. See the appropriate section of the GNU Bash Manual for full details.
+
The pattern and pattern comparison were discussed in episodes 2278 and 2293 in the context of pathname expansion. However, in this case of string comparison the pattern is treated as if the extglob option were enabled. It is also possible to enable another shopt option called 'nocasematch' to make the pattern case-insensitive.
+
It is possible to perform some quite sophisticated pattern matching this way, but the pattern must not be quoted. Doing so makes it a simple string in which the pattern features are not available. According to the documentation it is possible to quote part of a pattern however, if it is desired to treat pattern metacharacters as simple characters for example1.
+
Pattern matching examples
+
Example 4
+
animal="grizzly bear"
+if [[ $animal == *bear ]]; then
+ echo "Detected a type of bear: $animal"
+fi
+
In this example the test is for any string which ends with 'bear'. It also matches 'bear' with no earlier string.
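+
Example 5
+
$ cat bash11_ex3.sh
+#!/bin/bash
+
+#
+# String comparison with a pattern, using an 'extglob' type pattern
+#
+str="Further ancillary Bash tips - 11"
+if [[ $str == +([[:alnum:] -]) ]]; then
+    echo "Matched"
+fi
+$ ./bash11_ex3.sh
+Matched
+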
Here we try to match the title of this show with a pattern. The pattern consists of:
+
+
a POSIX character class which matches any alphabetic or numeric character - [:alnum:] (allowed only inside a character range expression)
+
a range expression which also includes a space and a hyphen as well as the character class
+
a sub-pattern (normally only allowed when extglob is enabled) which specifies one or more occurrences of the given pattern - +(pattern-list)
+
+
The result is a pattern which matches one or more alphanumeric characters, space and hyphen. This matches the show title.
+
This example is available as a downloadable file: bash11_ex3.sh
+
Example 6
+
$ cat bash11_ex4.sh
+#!/bin/bash
+
+#
+# String comparison with a pattern, using one of a list of patterns
+#
+for str in 'dog' 'pig' 'rat' 'cat' ''; do
+ if [[ $str == @(pig|dog|cat) ]]; then
+ echo "Matched '$str'"
+ else
+ echo "Didn't match '$str'"
+ fi
+done
+$ ./bash11_ex4.sh
+Matched 'dog'
+Matched 'pig'
+Didn't match 'rat'
+Matched 'cat'
+Didn't match ''
+
This example uses the @(pattern-list) form to match any one of a list of patterns where each subsidiary pattern is separated by '|' characters (alternative patterns). By this means the pattern will match any of the strings "pig", "dog" or "cat" but nothing else, not even a blank string.
+
This example is available as a downloadable file (bash11_ex4.sh) which has been run in the demonstration above.
+
Example 7
+
$ cat bash11_ex5.sh
+#!/bin/bash
+
+#
+# String comparison with a pattern in a variable. The pattern matches any
+# word that ends with 'man' that is longer than 3 letters.
+#
+pattern="+([[:word:]])man"
+echo "Pattern is: $pattern"
+for str in 'man' 'woman' 'German' 'Xman' 'romance' ''; do
+ if [[ $str == $pattern ]]; then
+ echo "Matched '$str'"
+ else
+ echo "Didn't match '$str'"
+ fi
+done
+$ ./bash11_ex5.sh
+Pattern is: +([[:word:]])man
+Didn't match 'man'
+Matched 'woman'
+Matched 'German'
+Matched 'Xman'
+Didn't match 'romance'
+Didn't match ''
+
This example matches any word that ends with 'man' with letters before 'man'. The expression '+([[:word:]])' specifies one or more characters that match letters, numbers and the underscore.
+
As an aside, I use the shellcheck tool inside vim. It checks that any scripts I type are valid, and flags any issues. It has a problem with:
+
if [[ $str == $pattern ]]; then
+
It tells me that I should quote $pattern because otherwise it might be subject to “glob matching”. Since that is precisely what I’m trying to do, this is mildly amusing.
+
In ./bash11_ex5.sh line 10:
+ if [[ $str == $pattern ]]; then
+ ^-- SC2053: Quote the rhs of == in [[ ]] to prevent glob matching.
+
This specific error can be turned off if desired.
+
The example is available as a downloadable file (bash11_ex5.sh).
At the time of writing I have not been able to get this to work and have not found any detailed documentation about how it is meant to work.↩
+
+
+
+
+
+
+
diff --git a/eps/hpr2669/hpr2669_bash12_ex1.sh b/eps/hpr2669/hpr2669_bash12_ex1.sh
new file mode 100755
index 0000000..f185a6b
--- /dev/null
+++ b/eps/hpr2669/hpr2669_bash12_ex1.sh
@@ -0,0 +1,27 @@
+#!/bin/bash
+
+# -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~
+# Experimenting with the meaning of the statement in the GNU Bash Manual:
+# "Any part of the pattern may be quoted to force the quoted portion to
+# be matched as a string."
+# -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~
+
+server="hackerpublicradio.org"
+
+#
+# Try some regular expressions in a loop. The first is a standard type, but
+# the second and third use a quoted regular expression metacharacter trying
+# different quotes.
+#
+for re in \
+ '^(hacker|hobby)publicradio\.org$' \
+ '^(hacker|hobby)publicradio"."org$' \
+ "^(hacker|hobby)publicradio'.'org$"
+do
+ echo "Using regular expression: $re"
+ if [[ $server =~ $re ]]; then
+ echo "This is HPR"
+ else
+ echo "No match"
+ fi
+done
diff --git a/eps/hpr2669/hpr2669_bash12_ex2.sh b/eps/hpr2669/hpr2669_bash12_ex2.sh
new file mode 100755
index 0000000..4eb3c78
--- /dev/null
+++ b/eps/hpr2669/hpr2669_bash12_ex2.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+
+#
+# Demonstrate the use of a regular expression to detect blank lines in a file,
+# and those containing only whitespace
+#
+
+re="^[[:digit:]]+[[:blank:]]*$"
+
+while read -r line; do
+ [[ $line =~ $re ]] && continue
+ echo "$line"
+done < <(cat -n "$0")
diff --git a/eps/hpr2669/hpr2669_bash12_ex3.sh b/eps/hpr2669/hpr2669_bash12_ex3.sh
new file mode 100755
index 0000000..abd3b4a
--- /dev/null
+++ b/eps/hpr2669/hpr2669_bash12_ex3.sh
@@ -0,0 +1,14 @@
+#!/bin/bash
+
+#
+# Demonstrate a more complex regular expression to detect matching words in
+# a file (one per line)
+#
+
+re='\<.{4,}[tl]ing\>'
+
+while read -r line; do
+ if [[ $line =~ $re ]]; then
+ echo "$line"
+ fi
+done < <(shuf -n 100 /usr/share/dict/words)
diff --git a/eps/hpr2669/hpr2669_bash12_ex4.sh b/eps/hpr2669/hpr2669_bash12_ex4.sh
new file mode 100755
index 0000000..9d46cab
--- /dev/null
+++ b/eps/hpr2669/hpr2669_bash12_ex4.sh
@@ -0,0 +1,24 @@
+#!/bin/bash
+
+#
+# Building a regular expression to match a simple-format ISO8601 date
+#
+
+re='^[0-9]{4}(-[0-9]{2}){2}$'
+
+#
+# The date is expected as the only argument
+#
+if [[ $# -ne 1 ]]; then
+ echo "Usage: $0 ISO8601_date"
+ exit 1
+fi
+
+#
+# Validate against the regex
+#
+if [[ $1 =~ $re ]]; then
+ echo "$1 is a valid date"
+else
+ echo "$1 is not a valid date"
+fi
diff --git a/eps/hpr2669/hpr2669_bash12_ex5.sh b/eps/hpr2669/hpr2669_bash12_ex5.sh
new file mode 100755
index 0000000..fdd9978
--- /dev/null
+++ b/eps/hpr2669/hpr2669_bash12_ex5.sh
@@ -0,0 +1,35 @@
+#!/bin/bash
+
+#
+# An IP address looks like this:
+# 192.168.0.5
+# Four groups of 1-3 numbers in the range 0..255 separated by dots.
+#
+re='^([0-9]{1,3}\.){3}[0-9]{1,3}$'
+
+#
+# The address is expected as the only argument
+#
+if [[ $# -ne 1 ]]; then
+ echo "Usage: $0 IP_address"
+ exit 1
+fi
+
+#
+# Validate against the regex
+#
+if [[ $1 =~ $re ]]; then
+ #
+ # Look at the components and check they are all in range
+ #
+ for d in ${1//./ }; do
+ if [[ $d -lt 0 || $d -gt 255 ]]; then
+ echo "$1 is not a valid IP address (contains $d)"
+ exit 1
+ fi
+ done
+
+ echo "$1 is a valid IP address"
+else
+ echo "$1 is not a valid IP address"
+fi
diff --git a/eps/hpr2669/hpr2669_full_shownotes.html b/eps/hpr2669/hpr2669_full_shownotes.html
new file mode 100755
index 0000000..351b1c1
--- /dev/null
+++ b/eps/hpr2669/hpr2669_full_shownotes.html
@@ -0,0 +1,534 @@
+
+
+
+
+
+
+
+ Additional ancillary Bash tips - 12 (HPR Show 2669)
+
+
+
+
+
+
+
+
+
Additional ancillary Bash tips - 12 (HPR Show 2669)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Making decisions in Bash
+
This is the twelfth episode in the Bash Tips sub-series. It is the fourth of a group of shows about making decisions in Bash.
+
In the last three episodes we saw the types of test Bash provides, and we looked briefly at some of the commands that use these tests. We looked at conditional expressions and all of the operators Bash provides to do this. We concentrated particularly on string comparisons which use glob and extended glob patterns.
+
Now we want to look at the other form of string comparison, using regular expressions.
+
Regular Expressions
+
Regular expressions appeared in Bash around 2004, in version 3. They can only be used in extended tests ([[...]]). It took a few sub-versions of Bash before the regular expression feature stabilised, so take care when researching the subject that what you find refers to versions greater than 3.21.
+
The operator '=~' is used to compare a string with a regular expression. The string or variable to be matched is written on the left of the =~ operator and the regular expression on the right (never the other way round).
+
Let’s begin by looking at a simple example of the use of a regular expression in Bash:
+
if [[ $server =~ ^(hacker|hobby)publicradio\.org$ ]]; then
+ echo "This is HPR"
+fi
+
Here the variable 'server' is being checked against a regular expression to determine whether it matches either hackerpublicradio.org or hobbypublicradio.org. If either alternative matches then the message 'This is HPR' is displayed, otherwise nothing is displayed.
+
Things to note:
+
+
The regular expression is not enclosed in quotes (remember how this is also the case with glob and extended glob patterns in the last episode)
+
It starts with a caret ('^') which anchors it to the start of the text
+
Two alternative sub-expressions are enclosed in parentheses with a vertical bar ('|') between them; this means either 'hacker' or 'hobby' will match
+
The full-stop before 'org' is a regular expression metacharacter so needs to be escaped with a backslash ('\')
+
The regular expression ends with a '$' which anchors it to the end of the text
+
+
As usual, the return value of the regular expression is 0 (true) if the string matches the pattern, and 1 (false) otherwise. If the regular expression is syntactically incorrect, the return value is 2. The regular expression is affected by the shell option nocasematch (as previously mentioned for glob patterns).
+
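For example, a minimal sketch:
+
shopt -s nocasematch
+[[ "HPR" =~ hpr ]] && echo "matches, case ignored"
+shopt -u nocasematch
+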
If the regular expression is enclosed in quotes then it is treated as a string, not as a regular expression.
+
A common convention is to store the regular expression in a Bash variable and then use it as the right hand side of the expression. This allows the regular expression to be built without concern for the characters it contains being misinterpreted by Bash. However, if the variable is enclosed in quotes in the conditional expression this causes Bash to treat it as a string, not as a regular expression.
+
If any part of the regular expression pattern is quoted then that part is treated as a string. This is how it is described in the GNU Bash Manual:
+
+
Any part of the pattern may be quoted to force the quoted portion to be matched as a string.
+
+
You would expect this to allow regular expression metacharacters to be used literally. I have not managed to get this to work, nor have I found any advice on using it in my researches.
+
The following downloadable script included with this show contains my failed test of this feature and is listed below.
+
$ cat bash12_ex1.sh
+#!/bin/bash
+
+# -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~
+# Experimenting with the meaning of the statement in the GNU Bash Manual:
+# "Any part of the pattern may be quoted to force the quoted portion to
+# be matched as a string."
+# -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~ -~
+
+server="hackerpublicradio.org"
+
+#
+# Try some regular expressions in a loop. The first is a standard type, but
+# the second and third use a quoted regular expression metacharacter trying
+# different quotes.
+#
+for re in \
+ '^(hacker|hobby)publicradio\.org$' \
+ '^(hacker|hobby)publicradio"."org$' \
+ "^(hacker|hobby)publicradio'.'org$"
+do
+ echo "Using regular expression: $re"
+ if [[ $server =~ $re ]]; then
+ echo "This is HPR"
+ else
+ echo "No match"
+ fi
+done
+$ ./bash12_ex1.sh
+Using regular expression: ^(hacker|hobby)publicradio\.org$
+This is HPR
+Using regular expression: ^(hacker|hobby)publicradio"."org$
+No match
+Using regular expression: ^(hacker|hobby)publicradio'.'org$
+No match
+
The script may be found here: bash12_ex1.sh if you want to experiment with it.
+
Regular Expression Syntax
+
A regular expression is a pattern that describes a set of strings. Regular expressions are constructed analogously to arithmetic expressions by using various operators to combine smaller expressions.
+
The fundamental building blocks are the regular expressions that match a single character. Most characters, including all letters and digits, are regular expressions that match themselves. Any metacharacter with special meaning may be quoted by preceding it with a backslash. Some regular expression operators contain backslashes, which may be a little confusing at first glance.
+
There are different types of regular expressions used by various tools and programming languages. Bash regular expressions use a form called extended regular expressions (ERE) and the metacharacters used within these expressions are described below:
+
+
+
+
+
+
+
+
Operator
+
Description
+
+
+
+
+
.
+
Represents any single character
+
+
+
*
+
Modifies the item to the left; the item matches zero or more times
+
+
+
?
+
Modifies the item to the left; the item matches zero or one time
+
+
+
+
+
Modifies the item to the left; the item matches one or more times
+
+
+
{n}
+
Modifier making the item to the left match exactly n times
+
+
+
{n,}
+
Modifier making the item to the left match n or more times
+
+
+
{n,m}
+
Modifier making the item to the left match between n and m times
+
+
+
{,m}
+
Modifier making the item to the left match between zero and m times2
+
+
+
^
+
Matches the empty string at the start of a line
+
+
+
$
+
Matches the empty string at the end of a line
+
+
+
[...]
+
Matches a single character from the set in brackets
+
+
+
|
+
Separates two regular expressions allowing alternative matches
+
+
+
(...)
+
Parentheses can enclose multiple alternative regular expressions
+
+
+
\b
+
Matches the empty string at the edge of a word
+
+
+
\B
+
Matches the empty string provided it’s not at the edge of a word
+
+
+
\<
+
Matches the empty string at the beginning of a word
+
+
+
\>
+
Matches the empty string at the end of a word
+
+
+
+
Examples
+
Demonstrations of the use of the above regular expression operators.
+
Example 1
+
Match a blank line, or a line containing only whitespace with the following:
+
^$
+^[[:blank:]]*$
+
A downloadable script demonstrating this concept:
+
$ cat bash12_ex2.sh
+#!/bin/bash
+
+#
+# Demonstrate the use of a regular expression to detect blank lines in a file,
+# and those containing only whitespace
+#
+
+re="^[[:digit:]]+[[:blank:]]*$"
+
+while read -r line; do
+ [[ $line =~ $re ]] && continue
+ echo "$line"
+done < <(cat -n "$0")
+
When run the script prints itself with line numbers but omits the blank lines:
+
$ ./bash12_ex2.sh
+1 #!/bin/bash
+3 #
+4 # Demonstrate the use of a regular expression to detect blank lines in a file,
+5 # and those containing only whitespace
+6 #
+8 re="^[[:digit:]]+[[:blank:]]*$"
+10 while read -r line; do
+11 [[ $line =~ $re ]] && continue
+12 echo "$line"
+13 done < <(cat -n "$0")
+
The variable 're' holds the regular expression we want to match against every line of the input. In this case we start with one or more digits because we’re feeding the result of cat -n to the loop and we get back lines with a number at the start. Other than the line number, we are looking for lines which only contain spaces, so we can omit them.
+
The loop is a while loop which calls read as its test expression. This will return true when it reads a line and false when there are no more. The -r option stops read from treating backslashes as escape characters; it isn’t strictly necessary here, but it is recommended.
+
The read gets its data from the redirection after the done part of the loop. Here we see a process substitution consisting of a cat -n of the current script using argument $0 which holds the script name.
+
Inside the loop the first line is a test (in a command list) which compares the latest line read from the file with the regular expression; if it matches, a continue command is called which skips to the end of the loop ready for the next iteration. If the line does not match, the echo command is invoked and the line is printed.
+
So, the overall effect is to print all lines which are not blank (after the line number).
+
This example is available as a downloadable file (bash12_ex2.sh).
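+
Example 2
+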
This example uses more of the regular expression operators listed above to match words:
+
\<.{4,}[tl]ing\>
+
Breaking this down:
+
+
+
+
Operator
+
Meaning
+
+
+
+
+
\<
+
this matches the start of a word
+
+
+
.{4,}
+
matches 4 or more characters
+
+
+
[tl]
+
matches either the letter 't' or the letter 'l'
+
+
+
ing
+
matches the letters 'ing'
+
+
+
\>
+
matches the end of a word
+
+
+
+
So we’re matching words ending in 'ing' preceded by a 't' or an 'l' with 4 or more letters (characters to be exact) before that. We will be using the dictionary in /usr/share/dict/words and extracting random words from it.
+
As before we have a downloadable script demonstrating this algorithm:
+
$ cat bash12_ex3.sh
+#!/bin/bash
+
+#
+# Demonstrate a more complex regular expression to detect matching words in
+# a file (one per line)
+#
+
+re='\<.{4,}[tl]ing\>'
+
+while read -r line; do
+ if [[ $line =~ $re ]]; then
+ echo "$line"
+ fi
+done < <(shuf -n 100 /usr/share/dict/words)
+
When run the script prints out a number of random words which match the regular expression; the words differ on each run because shuf selects lines at random.
The regular expression is stored in a variable. This is always wise, and particularly so in this case because there are characters in it which would have been misinterpreted by Bash if the expression had been written in the extended test.
+
Again we’re using a process substitution to run shuf, a tool which selects random lines from the nominated file, 100 lines in this case.
+
+
This example is available as a downloadable file (bash12_ex3.sh).
+
Example 3
+
This example takes a date as an argument and checks that it’s in the ISO8601 'YYYY-MM-DD' format.
+
$ cat bash12_ex4.sh
+#!/bin/bash
+
+#
+# Building a regular expression to match a simple-format ISO8601 date
+#
+
+re='^[0-9]{4}(-[0-9]{2}){2}$'
+
+#
+# The date is expected as the only argument
+#
+if [[ $# -ne 1 ]]; then
+ echo "Usage: $0 ISO8601_date"
+ exit 1
+fi
+
+#
+# Validate against the regex
+#
+if [[ $1 =~ $re ]]; then
+ echo "$1 is a valid date"
+else
+ echo "$1 is not a valid date"
+fi
+
Things to note are:
+
+
The regular expression looks for 4 digits, a hyphen, two digits, a hyphen and two digits. Since the “hyphen and two digits” part is repeated we enclose it in parentheses and add a repeat count modifier. The expression is anchored at the start and end otherwise a date like '2018-09-15-' would be valid.
+
The script makes a check on the number of arguments ('$#'), exiting with an error unless there’s one argument.
+
+
Examples of running the script:
+
$ ./bash12_ex4.sh
+Usage: ./bash12_ex4.sh ISO8601_date
+
+$ ./bash12_ex4.sh 2018-09-XX
+2018-09-XX is not a valid date
+
+$ ./bash12_ex4.sh 2018-09-15
+2018-09-15 is a valid date
+
This example is available as a downloadable file (bash12_ex4.sh).
+
Example 4
+
This example is similar to the previous one. It takes an IP address (version 4) as an argument and checks that it’s in the correct format. It performs more sophisticated validation than example 3, but it’s not using regular expressions to do this.
+
$ cat bash12_ex5.sh
+#!/bin/bash
+
+#
+# An IP address looks like this:
+# 192.168.0.5
+# Four groups of 1-3 numbers in the range 0..255 separated by dots.
+#
+re='^([0-9]{1,3}\.){3}[0-9]{1,3}$'
+
+#
+# The address is expected as the only argument
+#
+if [[ $# -ne 1 ]]; then
+ echo "Usage: $0 IP_address"
+ exit 1
+fi
+
+#
+# Validate against the regex
+#
+if [[ $1 =~ $re ]]; then
+ #
+ # Look at the components and check they are all in range
+ #
+ for d in ${1//./ }; do
+ if [[ $d -lt 0 || $d -gt 255 ]]; then
+ echo "$1 is not a valid IP address (contains $d)"
+ exit 1
+ fi
+ done
+
+ echo "$1 is a valid IP address"
+else
+ echo "$1 is not a valid IP address"
+fi
+
As mentioned, there is an extra check in this example. After confirming that the address consists of four groups of numbers it is split into its components with a parameter substitution and the components are checked in a loop to ensure they are between 0 and 255. If this test fails the script exits with an error message indicating which component failed to validate.
+
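The parameter substitution ${1//./ } replaces every dot in the argument with a space so that the for loop can split the address into its four numbers. A small illustration (the variable name ip is arbitrary):
+
$ ip="192.168.0.5"; echo "${ip//./ }"
+192 168 0 5
+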
Examples of running the script:
+
$ ./bash12_ex5.sh
+Usage: ./bash12_ex5.sh IP_address
+
+$ ./bash12_ex5.sh 192.168.0.
+192.168.0. is not a valid IP address
+
+$ ./bash12_ex5.sh 192.168.0.5
+192.168.0.5 is a valid IP address
+
+$ ./bash12_ex5.sh 192.168.0.256
+192.168.0.256 is not a valid IP address (contains 256)
+
This example is available as a downloadable file (bash12_ex5.sh).
+
Capture groups
+
As well as providing a means of grouping regular expression operators – to define alternatives or to allow a modifier to apply to a sub-expression – parentheses also define capture groups as seen when looking at sed and awk.
+
We will look at this subject in the next (and last) episode of this sub-series.
[:graph:]
+
[[:alnum:][:punct:]]
+
graphic characters (all characters which have graphic representation)
+
+
+
[:lower:]
+
[a-z]
+
lowercase letters
+
+
+
[:print:]
+
[[:graph:] ]
+
graphic characters and space
+
+
+
[:punct:]
+
[-!"#$%&'()*+,./:;<=>?@[]^_`{|}~]
+
all punctuation characters (all graphic characters except letters and digits)
+
+
+
[:space:]
+
[ \t\n\r\f\v]
+
all blank (whitespace) characters, including spaces, tabs, new lines, carriage returns, form feeds, and vertical tabs
+
+
+
[:upper:]
+
[A-Z]
+
uppercase letters
+
+
+
[:word:]
+
[A-Za-z0-9_]
+
word characters
+
+
+
[:xdigit:]
+
[0-9A-Fa-f]
+
hexadecimal digits
+
+
+
+
+
+
+
+
This is the version I am using:
+$ bash --version
+GNU bash, version 4.4.23(1)-release (x86_64-pc-linux-gnu)
+Copyright (C) 2016 Free Software Foundation, Inc.
+…↩
+
The bounds expression {,m} is not documented but seems to work as expected.↩
+
+
diff --git a/eps/hpr2679/hpr2679_bash13_ex1.sh b/eps/hpr2679/hpr2679_bash13_ex1.sh
new file mode 100755
index 0000000..8532817
--- /dev/null
+++ b/eps/hpr2679/hpr2679_bash13_ex1.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+
+#
+# Three word regular expression
+#
+re='^([a-zA-Z]+) +([a-zA-Z]+) +([a-zA-Z]+) *\.?'
+
+#
+# A sentence is expected as the only argument
+#
+if [[ $# -ne 1 ]]; then
+ echo "Usage: $0 sentence"
+ exit 1
+fi
+
+echo "Sentence: $1"
+if [[ $1 =~ $re ]]; then
+ echo "Matched"
+ for i in {0..3}; do
+ printf '%2d %s\n' $i "${BASH_REMATCH[$i]}"
+ done
+fi
+
diff --git a/eps/hpr2679/hpr2679_bash13_ex2.sh b/eps/hpr2679/hpr2679_bash13_ex2.sh
new file mode 100755
index 0000000..448fef1
--- /dev/null
+++ b/eps/hpr2679/hpr2679_bash13_ex2.sh
@@ -0,0 +1,50 @@
+#!/bin/bash
+
+# =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~
+# IP Address parsing revisited
+# =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~
+#
+# An IP address looks like this:
+# 192.168.0.5
+# Four groups of 1-3 numbers in the range 0..255 separated by dots.
+#
+re='^([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})$'
+
+#
+# The address is expected as the only argument
+#
+if [[ $# -ne 1 ]]; then
+ echo "Usage: $0 IP_address"
+ exit 1
+fi
+
+#
+# Validate against the regex
+#
+if [[ $1 =~ $re ]]; then
+ #
+ # Look at the components and check they are all in range
+ #
+ errs=0
+ problems=
+ for i in {1..4}; do
+ d="${BASH_REMATCH[$i]}"
+ if [[ $d -lt 0 || $d -gt 255 ]]; then
+ ((errs++))
+ problems+="$d "
+ fi
+ done
+
+ #
+ # Report any problems found
+ #
+ if [[ $errs -gt 0 ]]; then
+ problems="${problems:0:-1}"
+ echo "$1 is not a valid IP address; contains ${problems// /, }"
+ exit 1
+ fi
+
+ echo "$1 is a valid IP address"
+else
+ echo "$1 is not a valid IP address"
+fi
diff --git a/eps/hpr2679/hpr2679_bash13_ex3.sh b/eps/hpr2679/hpr2679_bash13_ex3.sh
new file mode 100755
index 0000000..2eb2aad
--- /dev/null
+++ b/eps/hpr2679/hpr2679_bash13_ex3.sh
@@ -0,0 +1,14 @@
+#!/bin/bash
+
+# =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~
+# Experimenting with backreferences in Bash regular expressions
+# =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~
+
+re='(\<.{1,10}\>) \1'
+
+if [[ $1 =~ $re ]]; then
+ echo "Matched: $1"
+else
+ echo "No match: $1"
+fi
+
diff --git a/eps/hpr2679/hpr2679_bash13_ex4.sh b/eps/hpr2679/hpr2679_bash13_ex4.sh
new file mode 100755
index 0000000..8365b89
--- /dev/null
+++ b/eps/hpr2679/hpr2679_bash13_ex4.sh
@@ -0,0 +1,46 @@
+#!/bin/bash
+
+#
+# Check that the data file exists
+#
+data="bash13_ex4.txt"
+[ -e "$data" ] || { echo "File $data not found"; exit 1; }
+
+#
+# Email addresses can be:
+# 1. local-part@domain
+# 2. Name
+#
+part1='([a-zA-Z0-9_][a-zA-Z0-9_.]+@[a-zA-Z0-9.-]+)'
+part2='([^<]+)<([a-zA-Z0-9_][a-zA-Z0-9_.]+@[a-zA-Z0-9.-]+)>'
+re="^($part1|$part2)$"
+
+#
+# Read and check each line from the file
+#
+while read -r line; do
+ #
+ # Does it match the regular expression?
+ #
+ if [[ $line =~ $re ]]; then
+ #declare -p BASH_REMATCH
+ #
+ # Decide which format it is depending on whether element 2 of
+ # BASH_REMATCH is zero length
+ #
+ if [[ -z ${BASH_REMATCH[2]} ]]; then
+ # Type 2
+ name="${BASH_REMATCH[3]}"
+ email="${BASH_REMATCH[4]}"
+ else
+ # Type 1
+ name=
+ email="${BASH_REMATCH[2]}"
+ fi
+ echo "Name: $name"
+ echo "Email: $email"
+ else
+ echo "Not recognised: $line"
+ fi
+ echo
+done < "$data"
diff --git a/eps/hpr2679/hpr2679_bash13_ex4.txt b/eps/hpr2679/hpr2679_bash13_ex4.txt
new file mode 100755
index 0000000..a4619a9
--- /dev/null
+++ b/eps/hpr2679/hpr2679_bash13_ex4.txt
@@ -0,0 +1,14 @@
+A Feldspar
+mcrawfor@live.com
+.42@unknown.mars
+Joel W
+tokuhirom@mac.com
+kramulous@sbcglobal.net
+kawasaki@me.com
+S Meir <smeier@yahoo.com>
+G Flake
+R.A.Mollin
+geekoid@sbcglobal.net
+vim_use@googlegroups.com
+vim@vim.org
+B@tm@n
diff --git a/eps/hpr2679/hpr2679_full_shownotes.html b/eps/hpr2679/hpr2679_full_shownotes.html
new file mode 100755
index 0000000..269c8cd
--- /dev/null
+++ b/eps/hpr2679/hpr2679_full_shownotes.html
@@ -0,0 +1,342 @@
Extra ancillary Bash tips - 13 (HPR Show 2679)
+
Dave Morriss
+
Table of Contents
+
Making decisions in Bash
+
This is the thirteenth episode in the Bash Tips sub-series. It is the fifth and final of a group of shows about making decisions in Bash.
+
In the last four episodes we saw the types of test Bash provides, and we looked briefly at some of the commands that use these tests. We looked at conditional expressions and all of the operators Bash provides to do this. We concentrated particularly on string comparisons, which use glob and extended glob patterns, and then we devoted an episode to Bash regular expressions.
+
Now we want to look at the final topic within regular expressions, the use of capture groups.
+
Capture groups
+
If you have followed the series on sed or the one covering the awk language the existence of capture groups will not be a surprise to you. It’s a way in which you can group elements of a regular expression using parentheses to denote a component of the string being compared.
+
For example you might want to look for three-word sentences:
+
re='^([a-zA-Z]+) +([a-zA-Z]+) +([a-zA-Z]+) *\.?'
+
+
There are three groups, each consisting of ([a-zA-Z]+), meaning one or more alphabetic characters.
+
The characters of each word are followed by one or more spaces (' +') in the first and second cases. The third case is followed by zero or more spaces and an optional full-stop.
+
The entire regular expression is anchored to the start of the string.
+
Only the words themselves are being captured by being in groups, not the intervening spaces.
+
+
We will look at a script that uses this regular expression soon.
+
BASH_REMATCH
+
Bash uses an internal read-only array called BASH_REMATCH to hold what is matched by a regular expression. The zeroth element of the array holds what the entire regular expression has matched, and the rest hold what was matched by any capture groups in the regular expression.
+
Like other regular expression systems each capture group is numbered in order of occurrence, so element 1 of BASH_REMATCH contains the first, element 2 the second and so forth.
+
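As a quick illustration (a toy example of my own, typed at a prompt), a successful match populates the array like this:
+
$ [[ 'hpr2679' =~ ^([a-z]+)([0-9]+)$ ]]
+$ declare -p BASH_REMATCH
+declare -ar BASH_REMATCH=([0]="hpr2679" [1]="hpr" [2]="2679")
+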
In sed it is possible to refer to a capture group with a sequence such as '\1', allowing regular expressions themselves to repeat parts such as '\(cat\)\1'. This is shown by the following sed example:
+
$ echo "catcat" | sed -e 's/\(cat\)\1/match/'
+match
+
Sadly this is apparently not available in Bash – or at least nothing is documented as far as I can find. (There are references to a partial implementation, but this doesn’t seem to be something to rely on).
+
See Example 2 below for some experiments with this.
+
The following downloadable example bash13_ex1.sh demonstrates the use of BASH_REMATCH:
+
$ cat bash13_ex1.sh
+#!/bin/bash
+
+#
+# Three word regular expression
+#
+re='^([a-zA-Z]+) +([a-zA-Z]+) +([a-zA-Z]+) *\.?'
+
+#
+# A sentence is expected as the only argument
+#
+if [[ $# -ne 1 ]]; then
+ echo "Usage: $0 sentence"
+ exit 1
+fi
+
+echo "Sentence: $1"
+if [[ $1 =~ $re ]]; then
+ echo "Matched"
+ for i in {0..3}; do
+ printf '%2d %s\n' $i "${BASH_REMATCH[$i]}"
+ done
+fi
+
+
This uses the regular expression discussed above in an if command. If the regular expression matches then a message is output and in a for loop the elements of BASH_REMATCH are printed with the index.
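For example, a run of my own:
+
$ ./bash13_ex1.sh 'Aardvarks eat ants.'
+Sentence: Aardvarks eat ants.
+Matched
+ 0 Aardvarks eat ants.
+ 1 Aardvarks
+ 2 eat
+ 3 ants
+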
Note that you cannot rewrite the regular expression using repetition with the expectation that the capture groups will behave as in the explicit form:
+
re='^(([a-zA-Z]+) *){3}\.?'
+
There are actually two capture groups here (the inner word group and the outer group enclosing the repetition), each applied three times. The result is that the regular expression matches and BASH_REMATCH[0] contains the whole matched string, but elements 1 and 2 will contain only the last matching word:
+
0 Aardvarks eat ants.
+ 1 ants
+ 2 ants
+
Examples
+
Example 1
+
In this example we enhance Example 4 from the last episode which checks an IP address for validity.
+
The example (bash13_ex2.sh) is downloadable from the HPR site.
+
$ cat bash13_ex2.sh
+#!/bin/bash
+
+# =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~
+# IP Address parsing revisited
+# =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~ =~
+#
+# An IP address looks like this:
+# 192.168.0.5
+# Four groups of 1-3 numbers in the range 0..255 separated by dots.
+#
+re='^([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})$'
+
+#
+# The address is expected as the only argument
+#
+if [[ $# -ne 1 ]]; then
+ echo "Usage: $0 IP_address"
+ exit 1
+fi
+
+#
+# Validate against the regex
+#
+if [[ $1 =~ $re ]]; then
+ #
+ # Look at the components and check they are all in range
+ #
+ errs=0
+ problems=
+ for i in {1..4}; do
+ d="${BASH_REMATCH[$i]}"
+ if [[ $d -lt 0 || $d -gt 255 ]]; then
+ ((errs++))
+ problems+="$d "
+ fi
+ done
+
+ #
+ # Report any problems found
+ #
+ if [[ $errs -gt 0 ]]; then
+ problems="${problems:0:-1}"
+ echo "$1 is not a valid IP address; contains ${problems// /, }"
+ exit 1
+ fi
+
+ echo "$1 is a valid IP address"
+else
+ echo "$1 is not a valid IP address"
+fi
Note how each group of digits is in parentheses making it a capture group. The intervening dots ('.') are outside the groups.
+
The loop which checks each group steps a value from 1 to 4, saving each element of BASH_REMATCH in a variable 'd' for convenience. If there is an error with a value lower than 0 or greater than 255 a variable 'errs' is incremented and the failing number is appended to the variable 'problems'.
+
The error count is checked once the loop has completed and if greater than zero an error message is produced with the list of problem numbers and the script exits with a false value.
+
Note that 'problems="${problems:0:-1}"' removes the last character (a trailing space) from the variable. Also '${problems// /, }' replaces all spaces in the string with a comma and a space to make a readable list.
+
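If you want to see those two parameter expansions in isolation, here is a small interactive sketch of my own (the variable contents are made up):
+
$ problems='500 256 '
+$ problems="${problems:0:-1}"
+$ echo "contains ${problems// /, }"
+contains 500, 256
+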
Examples of running the script:
+
$ ./bash13_ex2.sh 192.168.0.
+192.168.0. is not a valid IP address
+
+$ ./bash13_ex2.sh 192.168.0.5
+192.168.0.5 is a valid IP address
+
+$ ./bash13_ex2.sh 192.168.500.256
+192.168.500.256 is not a valid IP address; contains 500, 256
+
Example 2
+
Although I could not find any official documentation about back references in Bash regular expressions there does seem to be something in the version I am using. This example demonstrates the use of this feature in a simple way.
+
A back reference consists of a backslash ('\') and a number. The number refers to the capture group, counting from the left of the regular expression.
+
It looks, after testing, as if only a single digit is catered for, so this means capture groups 1-9.
+
This example is downloadable as usual: bash13_ex3.sh
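A couple of sample runs of my own (since this relies on undocumented behaviour, the results may differ in other Bash versions):
+
$ ./bash13_ex3.sh 'bye bye'
+Matched: bye bye
+$ ./bash13_ex3.sh 'bye now'
+No match: bye now
+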
Example 3
+
This is a moderately complex example which tries to parse a file of email addresses. The format of email addresses is quite complex, and this script does not try to be comprehensive in what it does. A Bash script is not the best way to perform this validation but it should be of interest nevertheless.
+
The formats catered for are:
+
+
local-part@domain – such as 'vim@vim.org'
+
name <local-part@domain> – such as 'HPR List <hpr@hackerpublicradio.org>'
+
+
There are others, but these are the ones most likely to be encountered.
+
This downloadable example (bash13_ex4.sh) reads data from a file (bash13_ex4.txt) which is also downloadable.
+
$ cat bash13_ex4.sh
+#!/bin/bash
+
+#
+# Check that the data file exists
+#
+data="bash13_ex4.txt"
+[ -e "$data" ] || { echo "File $data not found"; exit 1; }
+
+#
+# Email addresses can be:
+# 1. local-part@domain
+# 2. Name <local-part@domain>
+#
+part1='([a-zA-Z0-9_][a-zA-Z0-9_.]+@[a-zA-Z0-9.-]+)'
+part2='([^<]+)<([a-zA-Z0-9_][a-zA-Z0-9_.]+@[a-zA-Z0-9.-]+)>'
+re="^($part1|$part2)$"
+
+#
+# Read and check each line from the file
+#
+while read -r line; do
+ #
+ # Does it match the regular expression?
+ #
+ if [[ $line =~ $re ]]; then
+ #declare -p BASH_REMATCH
+ #
+ # Decide which format it is depending on whether element 2 of
+ # BASH_REMATCH is zero length
+ #
+ if [[ -z ${BASH_REMATCH[2]} ]]; then
+ # Type 2
+ name="${BASH_REMATCH[3]}"
+ email="${BASH_REMATCH[4]}"
+ else
+ # Type 1
+ name=
+ email="${BASH_REMATCH[2]}"
+ fi
+ echo "Name: $name"
+ echo "Email: $email"
+ else
+ echo "Not recognised: $line"
+ fi
+ echo
+done < "$data"
+
This script uses a single regular expression to match either of the formats. For convenience, because it is so long, I have built the variable 're' from the two variables 'part1' and 'part2'. The two alternative regular expressions are enclosed in parentheses, and separated by a vertical bar '|'. The entire thing is anchored at the start and end of the string.
+
The first sub-expression ('part1') is a single capture group which matches a bare local-part@domain address. The second ('part2') contains two capture groups. The first contains a square bracketed expression which matches any character that is not a less than sign ('<'). The modifier is a plus sign meaning one to any number of these characters.
+
Between the groups is a less than symbol which we don’t want to capture.
+
The second group is the same as the first sub-expression, and is followed by a greater than sign ('>').
+
+
A while loop with a read command is used to read from the data file which was defined earlier in the script and its existence verified.
+
Inside the loop the regular expression is compared with the line just read from the file. If it doesn’t match then the line is reported as not recognised. If it matches then the script can collect the elements from the BASH_REMATCH array and report them.
+
Because the regular expression is complex the way in which the important capture groups are written to BASH_REMATCH differs according to which sub-expression matched. The script contains a declare -p command which is commented out. Removing the '#' from this activates it; it is a way of displaying the attributes and contents of an array in Bash (as a command which could be used to build the array).
+
Doing this and looking at what happens when the script encounters addresses of the two types shows the following type of thing:
+
declare -ar BASH_REMATCH=([0]="kawasaki@me.com" [1]="kawasaki@me.com" [2]="kawasaki@me.com" [3]="" [4]="")
+Name:
+Email: kawasaki@me.com
+
+declare -ar BASH_REMATCH=([0]="S Meir <smeier@yahoo.com>" [1]="S Meir <smeier@yahoo.com>" [2]="" [3]="S Meir " [4]="smeier@yahoo.com")
+Name: S Meir
+Email: smeier@yahoo.com
+
The first address kawasaki@me.com matches the first sub-expression.
+
+
Remember that element zero of BASH_REMATCH contains everything matched by the regular expression, so we can ignore that.
+
Element one also matches everything because we have created an extra capture group by enclosing the two alternative sub-expressions in parentheses. This can also be ignored.
+
If the address matches the first sub-expression it will be written to the second element of BASH_REMATCH because this is the second capture group.
+
The third and fourth capture groups in the second sub-expression are not matched in this case so these elements of BASH_REMATCH are empty.
+
+
The second address S Meir <smeier@yahoo.com> matches the second sub-expression in the regular expression.
+
+
We can ignore BASH_REMATCH elements zero and one for the same reason as before.
+
Element 2 is empty because the address does not match the second capture group.
+
Elements three and four match the third and fourth capture groups.
+
+
The script uses the fact that element two of BASH_REMATCH is zero length ('-z') to determine which type of address was matched and to report the name and email address details accordingly.
+
Here is an excerpt from what is displayed when the script is run (with the declare command commented out):
+
Name:
+Email: mcrawfor@live.com
+
+Not recognised: .42@unknown.mars
+
+Name: S Meir
+Email: smeier@yahoo.com
+
+Not recognised: B@tm@n
+
diff --git a/eps/hpr2689/hpr2689_bash14_ex1.sh b/eps/hpr2689/hpr2689_bash14_ex1.sh
new file mode 100755
index 0000000..27cce2d
--- /dev/null
+++ b/eps/hpr2689/hpr2689_bash14_ex1.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+
+#
+# An argument-printing 'for' loop demonstration
+#
+for arg do
+ echo "$arg"
+done
+
diff --git a/eps/hpr2689/hpr2689_bash14_ex2.sh b/eps/hpr2689/hpr2689_bash14_ex2.sh
new file mode 100755
index 0000000..79c530e
--- /dev/null
+++ b/eps/hpr2689/hpr2689_bash14_ex2.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+
+#
+# A 'for' loop that uses multiple expressions for 'expr1' and 'expr3' courtesy
+# of the "comma operator"
+#
+for ((i = 1, j = 100; i <= 10; i++, j += 10)); do
+ echo "$i $j"
+done
diff --git a/eps/hpr2689/hpr2689_bash14_ex3.sh b/eps/hpr2689/hpr2689_bash14_ex3.sh
new file mode 100755
index 0000000..cba91eb
--- /dev/null
+++ b/eps/hpr2689/hpr2689_bash14_ex3.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+
+#
+# Two demonstrations of the use of 'break'
+#
+
+echo "Demo 1"
+for i in {a..c}{1..3}; do
+ echo "$i"
+ [ "$i" == "b2" ] && break
+done
+
+echo
+echo "Demo 2"
+for i in {a..c}{1..3}; do
+ for j in {1..3}; do
+ echo -n "$i "
+ [ "$i" == "b2" ] && { echo; break 2; }
+ done
+ echo
+done
+
diff --git a/eps/hpr2689/hpr2689_bash14_ex4.sh b/eps/hpr2689/hpr2689_bash14_ex4.sh
new file mode 100755
index 0000000..474e936
--- /dev/null
+++ b/eps/hpr2689/hpr2689_bash14_ex4.sh
@@ -0,0 +1,14 @@
+#!/bin/bash
+
+#
+# A demonstration of the use of 'continue'
+#
+
+for i in {a..c}{1..3}; do
+ for j in {1..3}; do
+ echo -n "$i "
+ [[ "$i" == b? ]] && { echo; continue 2; }
+ done
+ echo
+done
+
diff --git a/eps/hpr2689/hpr2689_bash14_ex5.sh b/eps/hpr2689/hpr2689_bash14_ex5.sh
new file mode 100755
index 0000000..568894e
--- /dev/null
+++ b/eps/hpr2689/hpr2689_bash14_ex5.sh
@@ -0,0 +1,11 @@
+#!/bin/bash
+
+#
+# A demonstration that anything that generates a list of words or numbers can
+# be used in a 'for' loop
+#
+
+for w in $(grep -E -v "'s$" /usr/share/dict/words | shuf -n 10); do
+ echo "$w"
+done
+
diff --git a/eps/hpr2689/hpr2689_full_shownotes.html b/eps/hpr2689/hpr2689_full_shownotes.html
new file mode 100755
index 0000000..af5627a
--- /dev/null
+++ b/eps/hpr2689/hpr2689_full_shownotes.html
@@ -0,0 +1,251 @@
Bash Tips - 14 (HPR Show 2689)
+
Dave Morriss
+
Table of Contents
+
More about loops
+
This is the fourteenth episode covering useful tips about using Bash. Episodes 9-13 covered Making Decisions in Bash and in these episodes we looked at while and until loops, but not for loops. This episode is making good this deficiency, and is also looking at break and continue which are very useful when using loops.
+
The Bash for loop
+
This command has two forms described as syntax diagrams below. The diagrams are taken from the GNU Bash Manual:
+
Format 1
+
for name [ [in [words …] ] ; ] do commands; done
+
If written with the 'in words' part, 'words' is a literal list or an expandable item which provides a list. The loop cycles once for each member of the list, setting variable 'name' to each successive member and executing 'commands' at each iteration.
+
for colour in red green blue; do
+ echo "$colour"
+done
+
This will output the three colour names, one per line.
+
for file in *.mp3; do
+ echo "$file"
+done
+
In this case '*.mp3' will expand to a list of the files in the current directory which have an 'mp3' extension. (Strictly speaking, if nothing matches, an unquoted pattern is left unchanged by default, so the loop would see the literal string '*.mp3'; the 'nullglob' shell option makes an unmatched pattern expand to nothing instead.)
+
Such a list might be empty, and so the form with a null list is legal:
+
for name in ; do command; done
+
In this case the loop will not run at all.
+
This loop may also be written without the 'in words' part, which has a special meaning: the loop cycles through all of the positional parameters. The same effect can be obtained with:
+
for name in "$@"; do command; done
+
The following very simple downloadable example bash14_ex1.sh demonstrates the use of this form:
+
$ cat bash14_ex1.sh
+#!/bin/bash
+
+#
+# An argument-printing 'for' loop demonstration
+#
+for arg do
+ echo "$arg"
+done
+
Invoking the script with a list of words as arguments results in the words being echoed back one per line:
+
$ ./bash14_ex1.sh Let joy be unconfined
+Let
+joy
+be
+unconfined
+
The return status of the for command is the exit status of the last command that executes within it. If there are no items in the expansion of words, no commands are executed, and the return status is zero.
+
Format 2
+
for (( expr1 ; expr2 ; expr3 )) ; do commands ; done
+
This for loop format uses numeric expressions to determine how many times to loop.
+
+
'expr1' is an arithmetic expression which is evaluated at the start; it often consists of a variable being set to a value.
+
'expr2' is also an arithmetic expression which is evaluated at each iteration of the loop; each time it evaluates to a non-zero value the commands in the loop are executed.
+
'expr3' is another arithmetic expression which is evaluated each time 'expr2' evaluates to a non-zero value.
+
+
For example:
+
for ((i = 1; i < 10; i++)); do
+ echo "$i"
+done
+
This will output the numbers 1-9, one per line.
+
There is a lot of flexibility allowed in the expressions between the double parentheses since the rules of Shell Arithmetic apply. In particular:
+
+
Spaces are not mandated and can be used as required for clarity
+
Variable references do not need leading '$' symbols
+
Any of the shell arithmetic operators can be used
+
+
For example:
+
for ((i = ((78+20)/2)-(4*12); i != 10; i ++)); do
+ echo $i
+done
+
This one initialises 'i' with a calculation (which evaluates to 1), and tests to see if it is not 10 at every iteration.
+
The next example is shown being typed as a multi-line command at the command line:
+
$ for ((i = 1, j = 100; i <= 10; i++, j += 10)); do
+> echo "$i $j"
+> done
+1 100
+2 110
+3 120
+4 130
+5 140
+6 150
+7 160
+8 170
+9 180
+10 190
+
It sets 'i' to 1 and 'j' to 100 using the comma operator which lets you join multiple unrelated expressions together. It loops until 'i' exceeds 10 and at each iteration it increments 'i' by 1 and 'j' by 10, again using the comma operator. Incrementing 'j' is done using the assignment operator '+='.
+
A downloadable copy is included with this episode if you would like to experiment with this for command: bash14_ex2.sh.
+
If any of 'expr1', 'expr2' and 'expr3' is missing it evaluates to 1. Thus, the following command defines an infinite loop:
+
for (( ; ; )) ; do commands ; done
+
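As a brief sketch of my own showing how such a loop is normally tamed, here it is paired with 'break' (described in the next section):
+
$ n=0
+$ for (( ; ; )); do ((n++)); [ "$n" -ge 3 ] && break; done
+$ echo "$n"
+3
+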
The return value of this type of for command is the exit status of the last command in 'commands' that is executed, or false if any of the expressions is invalid.
+
The break and continue commands
+
We have encountered these before in passing. They are both builtin commands inherited from the Bourne Shell. They are both for changing the sequence of execution of a loop (for, while, until, or select - we will look at the select command in a later episode).
+
The break command
+
break [n]
+
Exits from a loop. If 'n' is supplied it must be an integer number greater than or equal to 1. It specifies that the nth enclosing loop must be exited.
+
For example:
+
for i in {a..c}{1..3}; do
+ echo "$i"
+ [ "$i" == "b2" ] && break
+done
+
This outputs one of the letters followed by one of the numbers until the combination equals 'b2' at which point the break command is issued and the loop is exited. See episode 1884 for details of the Brace Expansion used here.
+
a1
+a2
+a3
+b1
+b2
+
This example contains a loop within a loop. The inner loop simply repeats the current value of 'i' three times using 'echo -n' to suppress the newline. We want to exit both loops whenever 'i' gets to 'b2'.
+
for i in {a..c}{1..3}; do
+ for j in {1..3}; do
+ echo -n "$i "
+ [ "$i" == "b2" ] && { echo; break 2; }
+ done
+ echo
+done
+
Note we include an echo with the break to add the newline to the partially completed line:
+
a1 a1 a1
+a2 a2 a2
+a3 a3 a3
+b1 b1 b1
+b2
+
A downloadable script is available containing versions of the two loops we have just looked at in case you would like to experiment with them: bash14_ex3.sh.
+
The continue command
+
continue [n]
+
Resumes the next iteration of a loop. If 'n' is supplied it must be an integer number greater than or equal to 1. It specifies that the nth enclosing loop must be resumed.
+
For example, the following downloadable file bash14_ex4.sh:
+
$ cat bash14_ex4.sh
+#!/bin/bash
+
+#
+# A demonstration of the use of 'continue'
+#
+
+for i in {a..c}{1..3}; do
+ for j in {1..3}; do
+ echo -n "$i "
+ [[ "$i" == b? ]] && { echo; continue 2; }
+ done
+ echo
+done
+
+
This is a variant of the previous example with the same nested loops and potentially the same output. The difference is that when the contents of variable 'i' begins with a 'b' the continue 2 command is invoked so that only the first instance is printed and the outer loop resumes with the next letter/number combination. Note that we are using the extended test here with a glob-style string match.
+
The final downloadable script bash14_ex5.sh is not very complex but shows that the list in the first format of the 'for' command can be generated by anything. This example uses the much-referenced /usr/share/dict/words with the shuf command to return 10 words. I first use grep to omit all the words that end with a possessive "'s" because so many of these seem ridiculous (when would you ever use "loggerhead's" for example?):
+
$ cat bash14_ex5.sh
+#!/bin/bash
+
+#
+# A demonstration that anything that generates a list of words or numbers can
+# be used in a 'for' loop
+#
+
+for w in $(grep -E -v "'s$" /usr/share/dict/words | shuf -n 10); do
+ echo "$w"
+done
+
+
Invoking the script returns a list of ten random words, one per line (different on each run).
+
diff --git a/eps/hpr2699/hpr2699_bash15_ex1.sh b/eps/hpr2699/hpr2699_bash15_ex1.sh
new file mode 100755
index 0000000..432c7cb
--- /dev/null
+++ b/eps/hpr2699/hpr2699_bash15_ex1.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 1 for Bash Tips show 15 - a working example similar to clacke's
+# problem example in the comments to HPR episode 2651
+#-------------------------------------------------------------------------------
+
+#
+# Initialise an array
+#
+items=()
+
+#
+# Populate the array with random words
+#
+while read -r item; do
+ items+=( "$item" )
+done < <(grep -E -v "'s$" /usr/share/dict/words | shuf -n 5)
+
+#
+# Print the array with word numbers
+#
+for ((i = 0, j = 1; i < ${#items[@]}; i++, j++)); do
+ echo "$j: ${items[$i]}"
+done
diff --git a/eps/hpr2699/hpr2699_bash15_ex2.sh b/eps/hpr2699/hpr2699_bash15_ex2.sh
new file mode 100755
index 0000000..07f0f98
--- /dev/null
+++ b/eps/hpr2699/hpr2699_bash15_ex2.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 2 for Bash Tips show 15 - you can also use a 'for' loop to load an
+# array
+#-------------------------------------------------------------------------------
+
+#
+# Initialise an array
+#
+items=()
+
+#
+# Populate the array with random words
+#
+for word in $(grep -E -v "'s$" /usr/share/dict/words | shuf -n 5); do
+ items+=( "$word" )
+done
+
+#
+# Print the array with word numbers
+#
+for ((i = 0, j = 1; i < ${#items[@]}; i++, j++)); do
+ echo "$j: ${items[$i]}"
+done
diff --git a/eps/hpr2699/hpr2699_full_shownotes.html b/eps/hpr2699/hpr2699_full_shownotes.html
new file mode 100755
index 0000000..133306d
--- /dev/null
+++ b/eps/hpr2699/hpr2699_full_shownotes.html
@@ -0,0 +1,268 @@
Bash Tips - 15 (HPR Show 2699)
+
Dave Morriss
+
Table of Contents
+
Pitfalls for the unwary Bash loop user
+
This is the fifteenth episode covering useful tips for Bash users. In the last episode we looked at the 'for' loop, and prior to that we looked at 'while' and 'until' loops. In this one I want to look at some of the loop-related issues that can trip up the unwary user.
+
Loops in Bash are extremely useful, and they are not at all difficult to use in their basic forms. However, there are some perhaps less than obvious issues that can result in unexpected behaviour.
+
Feeding a loop from a pipe
+
What is a pipeline?
+
Bash contains a feature known as a pipeline which is a sequence of one or more commands separated by a vertical bar ('|') control operator where the output of one command is connected to the input of another. We will spend some time on this subject (and related areas) later in this series of Bash Tips, but for now I want to explain enough for this particular episode.
+
The series of commands and '|' control characters is called a pipeline. The connection of one command to another is called a pipe.
+
A typical example is:
+
$ echo "Hello World" | sed -e 's/^.\+$/\U&/'
+HELLO WORLD
+
Here the string "Hello World" is piped to 'sed' which replaces all characters on the line by their upper case versions.
+
What is happening here is that the 'echo' command writes the arguments it has been given on the standard output channel and the pipe passes this data to the standard input channel for 'sed' to consume, and it in turn writes to its standard output (the terminal) and the transformed version is displayed.
+
One of the key characteristics of the pipeline is that each command is executed in its own subshell. This is a separate process within the operating system which inherits settings from the parent shell (process) that created (spawned) it, but it cannot affect the parent environment. In particular, environmental variables cannot be passed back to the parent.
+
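A minimal sketch of my own illustrates this: a variable changed inside a pipeline element vanishes with its subshell:
+
$ x=1
+$ echo 'hello' | { x=2; cat; }
+hello
+$ echo "$x"
+1
+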
We’ll look at pipelines in more detail in later shows in the Bash Tips sub-series.
+
Piping into a loop
+
One of the common scenarios where data is piped to a loop is where the output from the 'ls' command is being processed. For example:
+
ls *.mp3 | while read name; do echo $name; done
+
Although not really a pipeline issue it is a bad idea to use 'ls' like this because the output it produces is meant to be displayed, and there are often settings, aliases or defaults which cause 'ls' to add extra characters and colour codes to the file names.1
+
This type of pipeline can work if you ensure that you are using plain 'ls' and not an alias, as shown2:
+
$ unalias ls
+$ ls *.mp3 | while read name; do echo "$name"; done
+astonish.mp3
+birettas.mp3
+dizzying.mp3
+fabled.mp3
+neckline.mp3
+overtone.mp3
+salamis.mp3
+skunked.mp3
+sniffing.mp3
+theorize.mp3
+
Regardless of this, the advice is usually to avoid the use of 'ls' in this context.
+
(Note that this example does nothing useful since 'ls' itself can list files. More realistically, instead of the 'echo' such a loop might run a program or script on each of these files to do some useful work.)
+
Problems arise as a consequence of the loop running in a subshell when you want to work with variables in the loop. For example, you might want to count the files:
+
$ count=0
+$ ls *.mp3 | while read name; do ((count++)); done
+$ echo "$count"
+0
+
The count is zero – why?
+
The answer is that the 'count' variable being incremented in the loop is a copy of the one set to zero before the pipeline. Its value is being incremented in the subshell running the 'while' command, but it is discarded when the pipeline ends. Bash cannot pass back the value from the subshell.
+
A similar case was highlighted by clacke in the comments to show 2651 (Community News for September 2018). The code in question was of this general form:
+
items=()
+produce_items | while read -r item; do
+ items+=( "$item" )
+done
+do_stuff_with "${items[@]}"
+
Here, 'items' is an array (a subject we’ll be looking at soon in a forthcoming episode). It is assumed that 'produce_items' is a program or function that generates individual strings or numbers which are read by the 'read' in the loop and appended to the array. Then 'do_stuff_with' deals with all of the elements of the array.
+
This is what clacke says about it:
+
+
"items" gets updated just fine, in a subshell, and then after the pipe has finished executing, execution continues in the parent shell where the array is still empty.
+
+
What looks like instances of the same array outside and inside the loop are in fact separate arrays.
+
Avoiding the pipe pitfalls
+
We looked at the subject of process substitution in the Bash Tips series in show 6, episode 2045 (and also briefly considered the pipe problem which we’ve just examined in detail).
+
In that show we saw that the loop could be provided with data for a 'read' command by such a process:
+
$ unalias ls
+$ count=0
+$ while read name; do ((count++)); done < <(ls *.mp3)
+$ echo "$count"
+10
+
Here the 'while' loop runs in the parent process reading lines from the separate process containing the 'ls' command. This time the count is correct because we’re not counting in the subshell of a pipeline and expecting the result to be available to the parent process.3
+
The example clacke mentioned could also be remodelled as:
+
items=()
+while read -r item; do
+ items+=( "$item" )
+done < <(produce_items)
+do_stuff_with "${items[@]}"
+
The downloadable script in bash15_ex1.sh demonstrates a simplified version of the above example using the (now probably infamous) /usr/share/dict/words:
+
$ cat bash15_ex1.sh
+#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 1 for Bash Tips show 15 - a working example similar to clacke's
+# problem example in the comments to HPR episode 2651
+#-------------------------------------------------------------------------------
+
+#
+# Initialise an array
+#
+items=()
+
+#
+# Populate the array with random words
+#
+while read -r item; do
+ items+=( "$item" )
+done < <(grep -E -v "'s$" /usr/share/dict/words | shuf -n 5)
+
+#
+# Print the array with word numbers
+#
+for ((i = 0, j = 1; i < ${#items[@]}; i++, j++)); do
+ echo "$j: ${items[$i]}"
+done
+
Invoking the script results in a numbered list of five random words (different on each run).
+
It’s also possible to do something similar using a 'for' loop as in the following downloadable example bash15_ex2.sh:
+
$ cat bash15_ex2.sh
+#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 2 for Bash Tips show 15 - you can also use a 'for' loop to load an
+# array
+#-------------------------------------------------------------------------------
+
+#
+# Initialise an array
+#
+items=()
+
+#
+# Populate the array with random words
+#
+for word in $(grep -E -v "'s$" /usr/share/dict/words | shuf -n 5); do
+ items+=( "$word" )
+done
+
+#
+# Print the array with word numbers
+#
+for ((i = 0, j = 1; i < ${#items[@]}; i++, j++)); do
+ echo "$j: ${items[$i]}"
+done
+
I will leave you to try this one out; the result is the same as example 1 (with different words).
+
Using find instead of ls
+
Another improvement to the earlier file counting example would be to avoid the use of 'ls' and instead use 'find'. This command (and a number of others in the GNU Findutils manual) warrants a whole show or set of shows because it is so full of features, but for now we’ll just look at how it can be used in this context.
+
The typical way of using 'find' is like this:
+
find directory options
+
For example, to find all files in the current directory with a suffix of '.mp3' use:
+
find . -name '*.mp3' -print
+
The '-name' option defines a glob pattern to match the files we need returned. This must be quoted otherwise Bash will expand it on the command line, and we want 'find' to do that. The '-print' option causes the file to be reported. In this case the path of the file (relative to the nominated or defaulted directory) is also reported.
Unlike 'ls' the 'find' command does not sort the files.
+
One other difference from 'ls' is that 'find' will search any subdirectories as well. The following example makes a sub-directory called 'subdir' and creates a file within it. The 'find' command limits the search to files that begin with 'a' or 'i' for brevity:
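Something along these lines illustrates the point; 'imagined.mp3' is a made-up file name, and the output order of 'find' may vary:
+
$ mkdir subdir
+$ touch subdir/imagined.mp3
+$ find . -name '[ai]*.mp3'
+./astonish.mp3
+./subdir/imagined.mp3
+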
So, using 'find' rather than 'ls' the earlier example might be:
+
$ count=0
+$ while read name; do ((count++)); done < <(find . -maxdepth 1 -name "*.mp3")
+$ echo "$count"
+10
+
Using extglob-enabled extended patterns
+
Finally, let’s look at how the patterns available when the 'extglob' option is turned on can help to find files in a loop.
+
Since doing show 2293, where I looked at extended pattern matching features and the 'extglob' option enabled by the 'shopt' command, I have been using this capability a lot. As I mentioned in the show, my Debian system has 'extglob' enabled by default as part of the Bash completion extension. If your operating system does not do this you can set the option as described in show 2293.
+
The following example uses the files mentioned above where the sub-directory created earlier is still present. It uses a 'for' loop with the pattern '+(i|sa|t)*.mp3' which selects files beginning with 'i', with 'sa' and with 't'. Note that the second case contains two letters which is not something we can specify with simple glob patterns:
+
$ for f in +(i|sa|t)*.mp3; do echo "$f"; done
+salamis.mp3
+theorize.mp3
+
No files beginning with 'i' were returned; the only such file is in the sub-directory, so we know that, unlike 'find' in its default form, this type of search does not descend into sub-directories.
+
Note also that the files are sorted this time and do not have the directory './' on the front.
+
This is a good way to process files in a loop in some circumstances. For more complex requirements the big guns of the 'find' command are often needed.
+
Future topics
+
There are other issues related to those we have examined here that need to be looked at in future episodes. For example:
+
+
A guide to arrays in Bash; types of arrays, how to initialise them and how to access them
+
More about the 'find' command
+
The features of the 'read' command
+
+
We will cover these topics in upcoming episodes of Bash Tips.
Also, Unix and Linux filenames can contain a wide range of characters, which leads to complications that 'ls' doesn’t help with.↩
+
In case it is of interest, a group of 10 dummy *.mp3 files were generated for testing here. This was done by the following loop:
+
for w in $(grep -E -v "'s$" /usr/share/dict/words | grep -E '^.{3,8}$' | shuf -n 10); do
+ touch "${w}.mp3"
+done
+
Inside the command substitution the first 'grep' removes all possessive forms of words. The second one matches words between 3 and 8 characters in length, and 'shuf' then extracts 10 random words from all of that. The 'touch' command creates an empty file with the suffix '.mp3' using each word as the filename.↩
+
It didn’t occur to me at the time, but the process substitution would be the better place to unalias 'ls'. Using <(unalias ls; ls *.mp3) means the alias is only removed in the sub-process, not the main login process.↩
+
diff --git a/eps/hpr2709/hpr2709_bash16_ex1.sh b/eps/hpr2709/hpr2709_bash16_ex1.sh
new file mode 100755
index 0000000..15fe1ed
--- /dev/null
+++ b/eps/hpr2709/hpr2709_bash16_ex1.sh
@@ -0,0 +1,36 @@
+#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 1 for Bash Tips show 16: the difference between '*' and '@' as array
+# subscripts
+#-------------------------------------------------------------------------------
+
+#
+# Initialise an array
+#
+declare -a words
+
+#
+# Populate the array. Omit capitalised words and the weird possessives.
+# [Note: there are better ways of populating arrays as we'll see in a later
+# show]
+#
+for word in $(grep -E -v "(^[A-Z]|'s$)" /usr/share/dict/words | shuf -n 5); do
+ words+=( "$word" )
+done
+
+#
+# Report the array using '*' as the index
+#
+echo 'Using "${words[*]}"'
+for word in "${words[*]}"; do
+ echo "$word"
+done
+
+#
+# Report the array using '@' as the index
+#
+echo 'Using "${words[@]}"'
+for word in "${words[@]}"; do
+ echo "$word"
+done
diff --git a/eps/hpr2709/hpr2709_full_shownotes.html b/eps/hpr2709/hpr2709_full_shownotes.html
new file mode 100755
index 0000000..85d63d9
--- /dev/null
+++ b/eps/hpr2709/hpr2709_full_shownotes.html
@@ -0,0 +1,241 @@
Bash Tips - 16 (HPR Show 2709)
+
Dave Morriss
+
Table of Contents
+
Arrays in Bash
+
This is the first of a small group of shows on the subject of arrays in Bash. It is also the sixteenth show in the Bash Tips sub-series.
+
We have encountered Bash arrays at various points throughout this sub-series, and have even seen a number of examples, but the subject has never been examined in detail. This group of shows intends to make good this deficiency.
+
Types of arrays
+
Bash offers two types of arrays: indexed and associative.
+
Both types are one-dimensional. Indexed arrays are indexed by positive integers starting at zero. The indices do not have to be sequential (indexed arrays are sparse). Associative arrays are indexed by strings (like the equivalent in Awk). Both array types are unlimited in length and may contain strings.
+
Creating arrays
+
There are several ways of creating arrays in Bash, and the methods differ slightly between the two types.
+
Creating an indexed array
+
It is possible to declare an indexed array just by using a command of the form:
+
name[subscript]=value
+
For example:
+
fruits[0]='apple'
+fruits[1]='pear'
+
Here 'fruits' is the array name, and the subscripts of the elements being initialised are 0 and 1. The values being set are the strings to the right of each equals sign.
+
The subscript must be a number or an expression which evaluates to a number:
+
fruits[2]='grape'
+fruits[1+1]='grape' # an arithmetic expression; also element 2
+
The whole array can also be created in a single compound assignment:
+
fruits=([0]='apple' [1]='pear' [2]='grape')
+
However, the '[subscript]=' part is optional and the whole thing could be written as:
+
fruits=('apple' 'pear' 'grape')
+
Using this format the index of the element assigned is the last index assigned to by the statement plus one.
+
It is even possible to append to an already populated array thus:
+
fruits+=('banana') # append to existing data in an array
+
Note the use of the '+=' operator here. A common mistake is to try and add to an array using the plain '=' for the assignment:
+
fruits=('banana') # clear the array and start again
+
This will empty the array and write 'banana' to the zero indexed (first) element.
+
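A quick interactive sketch of the difference, using 'declare -p' (which we will meet shortly) to show the contents:
+
$ fruits=('apple' 'pear' 'grape')
+$ fruits+=('banana')
+$ declare -p fruits
+declare -a fruits=([0]="apple" [1]="pear" [2]="grape" [3]="banana")
+$ fruits=('banana')
+$ declare -p fruits
+declare -a fruits=([0]="banana")
+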
Another way to define an indexed array is with the 'declare' builtin command:
+
declare -a name
+
The '-a' option specifies that 'name' is an indexed array as opposed to other types of variables that can be created with this command.
+
There are some interesting features in the 'declare' builtin command in the context of arrays which we will look at in a later show.
+
Creating an associative array
+
As we have seen, with indexed arrays the indices can be derived implicitly (as sequential numbers), but associative arrays use strings as their indices, so these have to be defined explicitly.
+
Unlike indexed arrays, before working with an associative array it has to be declared explicitly:
+
declare -A capitals
+
Then the following syntax initialises an element:
+
name[subscript]=value
+
The subscript does not need to be quoted if it contains a space, but other characters in subscripts may need quotes. For example none of the following need to be quoted:
+
declare -A capitals
+capitals[England]='London'
+capitals[Scotland]='Edinburgh'
+capitals[Wales]='Cardiff'
+capitals[Northern Ireland]='Belfast'
+
As before the same effect can be achieved using a compound assignment but, unlike the indexed array, the subscript cannot be omitted (and it is safest to quote a subscript containing a space here):
+
+declare -A capitals
+capitals=([England]='London' [Scotland]='Edinburgh' [Wales]='Cardiff' ["Northern Ireland"]='Belfast')
+
It is also possible to populate the array at declaration time:
+
declare -A capitals=([England]='London' [Scotland]='Edinburgh' [Wales]='Cardiff' ["Northern Ireland"]='Belfast')
+
Using non-alphanumeric subscripts will always require quoting:
+
declare -A chars
+chars['[']='open square bracket'
+chars[']']='close square bracket'
+
Accessing array elements
+
A simple way of visualising the contents of either type of array is by using 'declare -p'. This generates a string in the form of a command which could be used to rebuild the array if needed.
+
For example:
+
$ declare -p fruits capitals chars
+declare -a fruits=([0]="apple" [1]="pear" [2]="grape" [3]="banana")
+declare -A capitals=(["Northern Ireland"]="Belfast" [England]="London" [Wales]="Cardiff" [Scotland]="Edinburgh")
+declare -A chars=(["["]="open square bracket" ["]"]="close square bracket")
+
Note that the ordering of associative array elements is arbitrary. Note also that the 'Northern Ireland' subscript is quoted by Bash, and of course the subscripts in the 'chars' array are quoted.
+
The usual way to access array elements is with the following syntax:
+
${name[subscript]}
+
Do not omit the curly brackets
+
The curly brackets (braces) are required to avoid conflicts with Bash’s filename expansion operators. The expression: '$fruits[1]' will be parsed as the contents of a variable called 'fruits' followed by the glob range expression containing the digit '1'.
+
For the arrays we’ve been using so far, these are the sorts of results that come from omitting the braces:
+
$ echo $fruits[1]
+apple[1]
+$ echo $capitals[1]
+[1]
+$ ls $fruits[1]
+ls: cannot access 'apple[1]': No such file or directory
+
When an array name is used without a subscript it is interpreted as the element with index zero. For an indexed array this may return an actual value, but for an associative array it depends on whether there’s an element with the string ‘0’ as a subscript.
+
$ declare -A hash=([a]=42 [b]=97 [0]='What is this?')
+$ echo $hash
+What is this?
+
With curly brackets
+
Using the braces we see:
+
$ echo "${fruits[1]}"
+pear
+
Accessing all elements of an array
+
There are two special subscripts that return all elements of an array. These are '@' and '*'.
+
For example:
+
$ echo "${fruits[@]}"
+apple pear grape banana
+
The difference between '@' and '*' is only apparent when the expression is written in double quotes:
+
+
'*' - the array elements are returned as a single word separated by whatever the first character of the 'IFS'1 variable is (usually a space).
+
'@' - the array elements are returned as a list of words.
+
+
This can be seen when expanding an array in a loop.
+
The downloadable script in bash16_ex1.sh demonstrates this (especially for fans of /usr/share/dict/words):
+
$ cat bash16_ex1.sh
+#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 1 for Bash Tips show 16: the difference between '*' and '@' as array
+# subscripts
+#-------------------------------------------------------------------------------
+
+#
+# Initialise an array
+#
+declare -a words
+
+#
+# Populate the array. Omit capitalised words and the weird possessives.
+# [Note: there are better ways of populating arrays as we'll see in a later
+# show]
+#
+for word in $(grep -E -v "(^[A-Z]|'s$)" /usr/share/dict/words | shuf -n 5); do
+ words+=( "$word" )
+done
+
+#
+# Report the array using '*' as the index
+#
+echo 'Using "${words[*]}"'
+for word in "${words[*]}"; do
+ echo "$word"
+done
+
+#
+# Report the array using '@' as the index
+#
+echo 'Using "${words[@]}"'
+for word in "${words[@]}"; do
+ echo "$word"
+done
+
Invoking the script reports the array of random words in two ways: with '*' the five words appear on a single line (as one word), and with '@' they appear one per line (as separate words).
I knew I’d talked about 'IFS' before as I was recording the audio but forgot which show it was. Have a look at the long notes for hpr2045 if you want more information.↩
+
diff --git a/eps/hpr2719/hpr2719_bash17_ex1.sh b/eps/hpr2719/hpr2719_bash17_ex1.sh
new file mode 100755
index 0000000..a7e5869
--- /dev/null
+++ b/eps/hpr2719/hpr2719_bash17_ex1.sh
@@ -0,0 +1,34 @@
+#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 1 for Bash Tips show 17: Negative indices
+#-------------------------------------------------------------------------------
+
+#
+# Seed the Fibonacci sequence in an indexed array
+#
+declare -a fib=(0 1 1)
+
+#
+# Populate the rest up to (and including) the 20th element
+#
+for ((i = 3; i <= 20; i++)); do
+ fib[$i]=$((fib[i-2]+fib[i-1]))
+done
+
+#
+# Show the whole array
+#
+echo "Fibonacci sequence"
+echo "${fib[*]}"
+echo
+
+#
+# Print a few elements working backwards
+#
+for i in {-1..-4}; do
+ echo "fib[$i] = ${fib[$i]}"
+done
+
+exit
+
diff --git a/eps/hpr2719/hpr2719_bash17_ex2.sh b/eps/hpr2719/hpr2719_bash17_ex2.sh
new file mode 100755
index 0000000..2b8db2c
--- /dev/null
+++ b/eps/hpr2719/hpr2719_bash17_ex2.sh
@@ -0,0 +1,35 @@
+#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 2 for Bash Tips show 17: Array concatenation
+#-------------------------------------------------------------------------------
+
+#
+# Make three indexed arrays
+#
+declare -a a1 a2 a3
+
+#
+# Seed the random number generator
+#
+RANDOM=$(date +%N)
+
+#
+# Place 10 random numbers between 1..100 into the arrays a1 and a2
+#
+for ((i=1; i<=10; i++)); do
+ a1+=( $(( ( RANDOM % 100 ) + 1 )) )
+ a2+=( $(( ( RANDOM % 100 ) + 1 )) )
+done
+
+#
+# Show the results
+#
+echo "a1: ${a1[*]}"
+echo "a2: ${a2[*]}"
+
+#
+# Concatenate a1 and a2 into a3 and show the result
+#
+a3=( "${a1[@]}" "${a2[@]}" )
+echo "a3: ${a3[*]}"
diff --git a/eps/hpr2719/hpr2719_bash17_ex3.sh b/eps/hpr2719/hpr2719_bash17_ex3.sh
new file mode 100755
index 0000000..cc6c1ac
--- /dev/null
+++ b/eps/hpr2719/hpr2719_bash17_ex3.sh
@@ -0,0 +1,39 @@
+#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 3 for Bash Tips show 17: Using "substring expansion" to extract
+# associative array elements
+#-------------------------------------------------------------------------------
+
+#
+# Make two indexed arrays each containing 10 letters. Note: this is not the
+# best way to do this!
+#
+declare -a a1=( $(echo {a..j}) )
+declare -a a2=( $(echo {k..t}) )
+
+#
+# Build an associative array using one set of letters as subscripts and the
+# other as the values
+#
+declare -A hash
+for ((i=0; i<10; i++)); do
+ hash[${a1[$i]}]="${a2[$i]}"
+done
+
+#
+# Display the associative array contents
+#
+echo "Contents of associative array 'hash'"
+for key in "${!hash[@]}"; do
+ printf '%s=%s\n' "hash[$key]" "${hash[$key]}"
+done
+echo
+
+#
+# Walk the associative array printing pairs of values
+#
+echo "Pairs of values from array 'hash'"
+for ((i=1; i<10; i+=2)); do
+ printf '%d: %s\n' "$i" "${hash[*]:$i:2}"
+done
diff --git a/eps/hpr2719/hpr2719_bash17_ex4.sh b/eps/hpr2719/hpr2719_bash17_ex4.sh
new file mode 100755
index 0000000..38fe04a
--- /dev/null
+++ b/eps/hpr2719/hpr2719_bash17_ex4.sh
@@ -0,0 +1,39 @@
+#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 4 for Bash Tips show 17: Using "substring expansion" to extract
+# associative array elements
+#-------------------------------------------------------------------------------
+
+#
+# Make two indexed arrays each containing 6 random words. Note: this is not
+# the best way to do this!
+#
+declare -a a1=( $(for word in $(grep -E -v "'s$" /usr/share/dict/words | shuf -n 6); do echo $word; done) )
+declare -a a2=( $(for word in $(grep -E -v "'s$" /usr/share/dict/words | shuf -n 6); do echo $word; done) )
+
+#
+# Build an associative array using one set of words as subscripts and the
+# other as the values
+#
+declare -A hash
+for ((i=0; i<6; i++)); do
+ hash[${a1[$i]}]="${a2[$i]}"
+done
+
+#
+# Display the associative array contents
+#
+echo "Contents of associative array 'hash'"
+for key in "${!hash[@]}"; do
+ printf '%s=%s\n' "hash[$key]" "${hash[$key]}"
+done
+echo
+
+#
+# Walk the associative array printing pairs of values
+#
+echo "Pairs of values from array 'hash'"
+for ((i=1; i<6; i+=2)); do
+ printf '%d: %s\n' "$i" "${hash[*]:$i:2}"
+done
diff --git a/eps/hpr2719/hpr2719_bash17_ex5.sh b/eps/hpr2719/hpr2719_bash17_ex5.sh
new file mode 100755
index 0000000..17dcbfd
--- /dev/null
+++ b/eps/hpr2719/hpr2719_bash17_ex5.sh
@@ -0,0 +1,42 @@
+#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 5 for Bash Tips show 17: Trimming leading or trailing parts
+#-------------------------------------------------------------------------------
+
+#
+# Make an indexed array of root vegetables
+#
+declare -a vegs=(celeriac artichoke asparagus parsnip mangelwurzel daikon turnip)
+printf '%s\n\n' "${vegs[*]}"
+
+#
+# Demonstrate some trimming
+#
+echo "1. Removing the first character:"
+echo "${vegs[@]#?}"
+
+echo "2. Removing characters up to and including the first vowel:"
+echo "${vegs[@]#*[aeiou]}"
+
+echo "3. Removing characters up to and including the last vowel:"
+printf '[%s] ' "${vegs[@]##*[aeiou]}"
+echo
+
+echo "4. Using an extglob pattern to remove several different leading patterns:"
+shopt -s extglob
+echo "${vegs[@]#@(cele|arti|aspa|mangel)}"
+
+echo "5. Removing the last character":
+echo "${vegs[@]%?}"
+
+echo "6. Removing from the last vowel to the end:"
+echo "${vegs[@]%[aeiou]*}"
+
+echo "7. Removing from the first vowel to the end:"
+printf '[%s] ' "${vegs[@]%%[aeiou]*}"
+echo
+
+echo "8. Using an extglob pattern to remove several different trailing patterns:"
+echo "${vegs[@]%@(iac|oke|gus|nip|zel)}"
+
diff --git a/eps/hpr2719/hpr2719_full_shownotes.html b/eps/hpr2719/hpr2719_full_shownotes.html
new file mode 100755
index 0000000..4fa039d
--- /dev/null
+++ b/eps/hpr2719/hpr2719_full_shownotes.html
@@ -0,0 +1,395 @@
Bash Tips - 17 (HPR Show 2719)
+
Dave Morriss
+
Table of Contents
+
Arrays in Bash
+
This is the second of a small group of shows on the subject of arrays in Bash. It is also the seventeenth show in the Bash Tips sub-series.
+
In the last show we saw the two types of arrays, and learned about the multiple ways of creating them and populating them. We also looked at how array elements and entire arrays are accessed.
+
Now we want to continue looking at array access and some of the various parameter expansion operations available.
+
Negative indices with indexed arrays
+
When we looked at indexed array subscripts in the last episode we only considered positive numbers (and the '*' and '@' special subscripts). It is also possible to use negative numbers which index relative to the end of the array. The index '-1' means the last element, '-2' the penultimate, and so forth.
+
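A quick interactive sketch of my own shows the idea:
+
$ declare -a a=(zero one two three)
+$ echo "${a[-1]}"
+three
+$ echo "${a[-2]}"
+two
+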
The downloadable script in bash17_ex1.sh demonstrates a use of negative indices:
+
#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 1 for Bash Tips show 17: Negative indices
+#-------------------------------------------------------------------------------
+
+#
+# Seed the Fibonacci sequence in an indexed array
+#
+declare -a fib=(0 1 1)
+
+#
+# Populate the rest up to (and including) the 20th element
+#
+for ((i = 3; i <= 20; i++)); do
+ fib[$i]=$((fib[i-2]+fib[i-1]))
+done
+
+#
+# Show the whole array
+#
+echo "Fibonacci sequence"
+echo "${fib[*]}"
+echo
+
+#
+# Print a few elements working backwards
+#
+for i in {-1..-4}; do
+ echo "fib[$i] = ${fib[$i]}"
+done
+
+exit
+
+
The script seeds an indexed array called 'fib' with the start of the Fibonacci sequence. This sequence builds its elements by adding together the previous two, and that is what the 'for' loop does, up to the 20th element.
+
Note that in the 'for' loop an arithmetic expansion expression is used: $((fib[i-2]+fib[i-1])) which does not require dollar signs or curly brackets inside it.
+
The script prints all of the generated numbers then picks out the last four to demonstrate negative indexing:
+
Fibonacci sequence
+0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765
+
+fib[-1] = 6765
+fib[-2] = 4181
+fib[-3] = 2584
+fib[-4] = 1597
+
Array concatenation
+
There is no special syntax to concatenate one array to another. The simplest way to do this is using a command of the form:
+
array1=( "${array2[@]}" "${array3[@]}" )
+
The expression "${array2[@]}", as we already know, returns the entirety of 'array2' as a list of words. Effectively the parentheses are filled with the contents of each array as a list of separate words.
+
It is also possible to append an array to an already filled array thus:
+
array1+=( "${array4[@]}" )
+
The downloadable script in bash17_ex2.sh demonstrates array concatenation1:
+
#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 2 for Bash Tips show 17: Array concatenation
+#-------------------------------------------------------------------------------
+
+#
+# Make three indexed arrays
+#
+declare -a a1 a2 a3
+
+#
+# Seed the random number generator
+#
+RANDOM=$(date +%N)
+
+#
+# Place 10 random numbers between 1..100 into the arrays a1 and a2
+#
+for ((i=1; i<=10; i++)); do
+ a1+=( $(( ( RANDOM % 100 ) + 1 )) )
+ a2+=( $(( ( RANDOM % 100 ) + 1 )) )
+done
+
+#
+# Show the results
+#
+echo "a1: ${a1[*]}"
+echo "a2: ${a2[*]}"
+
+#
+# Concatenate a1 and a2 into a3 and show the result
+#
+a3=( "${a1[@]}" "${a2[@]}" )
+echo "a3: ${a3[*]}"
+
Note the use of the special 'RANDOM' variable which generates a (pseudo) random integer between 0 and 32767 on each access. To ensure the random sequence is not the same on each use the generator can be seeded which is what the command RANDOM=$(date +%N) does.
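As an aside, the '% 100' in the script is what maps 'RANDOM' into a smaller range; for example, a die roll (a sketch of my own; the output will of course vary):
+
$ echo $(( (RANDOM % 6) + 1 ))
+4
+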
Back in episode 1648 in 2014 I described most of the Bash parameter expansion operations available, some in the context of arrays. Now I want to visit these again as well as a few more.
+
Substring expansion
+
This performs two different functions:
+
+
sub-strings can be selected from strings
+
array element subsets can be extracted from arrays
+
+
The syntax of this feature is:
+
${parameter:offset}
+${parameter:offset:length}
+
Both the offset and length are arithmetic expressions, which may be negative in some cases – which means to count backwards from the end of the string (or indexed array elements). A negative offset must be preceded by a space to stop Bash from interpreting it as the ':-' (use default values) expansion. A negative length is only permitted with strings, not arrays. If length is omitted the remainder of the string or array after offset is returned.
+
When used with a single array element it is possible to extract parts of the string2:
+
$ declare -a planets=(mercury venus earth mars jupiter saturn uranus neptune)
+$ echo "${planets[4]:2:3}" # middle letters of 'jupiter'
+pit
+$ echo "${planets[5]: -3:2}" # first two of the last 3 letters of 'saturn'
+ur
+$ echo "${planets[5]: -3}" # last 3 letters of 'saturn'
+urn
+$ echo "${planets[6]:1:-1}" # start at letter 1 up to but not including the last letter
+ranu
+
When used with the entirety of an indexed array (subscript '@' or '*') then array elements are extracted:
+
$ echo "${planets[@]:1:3}"
+venus earth mars
+$ echo "${planets[@]: -3:2}" # count back 3 from the end, display 2 elements
+saturn uranus
+$ echo "${planets[@]: -3}"
+saturn uranus neptune
+
As mentioned, the length may not be negative when using substring expansion to select indexed array elements.
+
Experiments have shown that elements can also be extracted from associative arrays with substring expansion, though since the element order is not defined the results may not be reliable.
+
+
Note: You might want to skip this section since it’s discussing a non-documented feature which shouldn’t be used in production.
+
The downloadable script in bash17_ex3.sh demonstrates a use of "substring expansion" with associative arrays:
+
#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 3 for Bash Tips show 17: Using "substring expansion" to extract
+# associative array elements
+#-------------------------------------------------------------------------------
+
+#
+# Make two indexed arrays each containing 10 letters. Note: this is not the
+# best way to do this!
+#
+declare -a a1=( $(echo {a..j}) )
+declare -a a2=( $(echo {k..t}) )
+
+#
+# Build an associative array using one set of letters as subscripts and the
+# other as the values
+#
+declare -A hash
+for ((i=0; i<10; i++)); do
+ hash[${a1[$i]}]="${a2[$i]}"
+done
+
+#
+# Display the associative array contents
+#
+echo "Contents of associative array 'hash'"
+for key in "${!hash[@]}"; do
+ printf '%s=%s\n' "hash[$key]" "${hash[$key]}"
+done
+echo
+
+#
+# Walk the associative array printing pairs of values
+#
+echo "Pairs of values from array 'hash'"
+for ((i=1; i<10; i+=2)); do
+ printf '%d: %s\n' "$i" "${hash[*]:$i:2}"
+done
+
The two indexed arrays 'a1' and 'a2' are filled with a series of 10 letters and these are then used to build the test associative array 'hash'. This array is printed by the script to show what we did.
+
Note that we used the expression "${!hash[@]}" which returns a list of the subscripts for the 'hash' array. We’ll look at this in more detail shortly.
+
Note also the use of "${hash[*]:$i:2}" using '*' in the final 'printf'. This ensures that the two array elements returned are stored in one word. This allows us to use '%s' in the 'printf' format to print the two values as one.
+
The final loop in the script uses substring expansion to display pairs of array elements. It does this successfully, but it may well be that more complex examples will not work.
+
Invoking the script results in the following:
+
Contents of associative array 'hash'
+hash[a]=k
+hash[b]=l
+hash[c]=m
+hash[d]=n
+hash[e]=o
+hash[f]=p
+hash[g]=q
+hash[h]=r
+hash[i]=s
+hash[j]=t
+
+Pairs of values from array 'hash'
+1: k l
+3: m n
+5: o p
+7: q r
+9: s t
+
+
I tried another experiment like the previous one, this time using random words. I found this one worked too.
+
The downloadable script in bash17_ex4.sh contains this experiment. I will leave it for you to investigate further if this interests you.
+
+
List keys (indices or subscripts)
+
This expansion gives access to the indices, subscripts or keys of arrays. The syntax is:
+
${!name[@]}
+${!name[*]}
+
If name is an array variable, expands to the list of array indices (keys) assigned in name. If name is not an array, expands to 0 if name is set and null otherwise. When '@' is used and the expansion appears within double quotes, each key expands to a separate word.
+
This is used in bash17_ex3.sh and bash17_ex4.sh to enable the associative arrays to be printed with their keys. The 'for' loop uses:
+
for key in "${!hash[@]}"; do
+
The choice between '*' and '@' (when the expansion is written in double quotes) determines whether the keys are returned as one concatenated word or as a series of separate words.
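+
For example, reusing the 'planets' array declared earlier:
+
$ printf '[%s] ' "${!planets[@]}" ; echo
+[0] [1] [2] [3] [4] [5] [6] [7]
+$ printf '[%s] ' "${!planets[*]}" ; echo
+[0 1 2 3 4 5 6 7]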
+
Length of string or array
+
We saw this expansion in show 1648. The syntax is:
+
${#parameter}
+
+${#name[@]}
+${#name[*]}
+
In the case where parameter is a simple variable this returns the length of the contents (i.e. the length of the string produced by expanding the parameter).
+
$ veggie='kohlrabi'
+$ echo "${#veggie}"
+8
+
In the case of an array with an index of '*' or '@' then it returns the number of elements in the array:
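+
$ echo "${#planets[@]}"
+8
+$ echo "${#planets}"    # no subscript, so this is the length of element 0
+7
+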
Note that using just the name of an indexed array (without a subscript) returns the length of the first element (the '[0]' index is assumed, as we discussed last episode).
+
Removing leading or trailing parts that match a pattern
+
Again we looked at these in show 1648. There are four syntaxes listed in the manual:
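+
${parameter#word}
+${parameter##word}
+${parameter%word}
+${parameter%%word}
+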
In these word is a glob pattern (or an extglob pattern if enabled). The form using one or two '#' characters after the parameter removes leading characters and the one using one or two '%' characters removes trailing characters.
+
The significance of the single versus the double '#' and '%' is that in the single case the shortest leading/trailing pattern is matched. In the double case the longest leading/trailing pattern is matched.
+
The downloadable script in bash17_ex5.sh demonstrates the use of removing leading and trailing strings matching patterns in a variety of ways:
+
#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 5 for Bash Tips show 17: Trimming leading or trailing parts
+#-------------------------------------------------------------------------------
+
+#
+# Make an indexed array of root vegetables
+#
+declare -a vegs=(celeriac artichoke asparagus parsnip mangelwurzel daikon turnip)
+printf '%s\n\n' "${vegs[*]}"
+
+#
+# Demonstrate some trimming
+#
+echo "1. Removing the first character:"
+echo "${vegs[@]#?}"
+
+echo "2. Removing characters up to and including the first vowel:"
+echo "${vegs[@]#*[aeiou]}"
+
+echo "3. Removing characters up to and including the last vowel:"
+printf '[%s] ' "${vegs[@]##*[aeiou]}"
+echo
+
+echo "4. Using an extglob pattern to remove several different leading patterns:"
+shopt -s extglob
+echo "${vegs[@]#@(cele|arti|aspa|mangel)}"
+
+echo "5. Removing the last character":
+echo "${vegs[@]%?}"
+
+echo "6. Removing from the last vowel to the end:"
+echo "${vegs[@]%[aeiou]*}"
+
+echo "7. Removing from the first vowel to the end:"
+printf '[%s] ' "${vegs[@]%%[aeiou]*}"
+echo
+
+echo "8. Using an extglob pattern to remove several different trailing patterns:"
+echo "${vegs[@]%@(iac|oke|gus|nip|zel)}"
+
+
Note the use of 'printf' in the script. This is used to enclose the results of the trimming in square brackets in order to make the results clearer. In some cases the trimming has removed the entirety of the string which would have been harder to see if this hadn’t been done.
+
Invoking the script results in the following:
+
celeriac artichoke asparagus parsnip mangelwurzel daikon turnip
+
+1. Removing the first character:
+eleriac rtichoke sparagus arsnip angelwurzel aikon urnip
+2. Removing characters up to and including the first vowel:
+leriac rtichoke sparagus rsnip ngelwurzel ikon rnip
+3. Removing characters up to and including the last vowel:
+[c] [] [s] [p] [l] [n] [p]
+4. Using an extglob pattern to remove several different leading patterns:
+riac choke ragus parsnip wurzel daikon turnip
+5. Removing the last character:
+celeria artichok asparagu parsni mangelwurze daiko turni
+6. Removing from the last vowel to the end:
+celeri artichok asparag parsn mangelwurz daik turn
+7. Removing from the first vowel to the end:
+[c] [] [] [p] [m] [d] [t]
+8. Using an extglob pattern to remove several different trailing patterns:
+celer artich aspara pars mangelwur daikon tur
+
I realised I hadn’t discussed associative array concatenation as I was recording the audio. There is no simple way to concatenate these types of arrays. However, we will look at a way of doing this in the next episode.↩
+
I added the last example after realising there was no negative length when recording the audio.↩
Using a DIN Rail to mount a Raspberry Pi (HPR Show 2724)
+
I created DIN rail fittings for attaching my RPi 3B+ and an SSD disk
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Overview
+
A DIN rail is a metal rail of a standard type, used for mounting pieces of electrical equipment inside an equipment rack, whether in a building, in a machine, and so forth. It’s common to see DIN rails holding circuit breakers, for example.
A number of people in the Maker Community have made use of these rails, and there are a number of freely available designs for 3D-printable stands on which you can mount them. There are also designs for mounts onto which devices like Raspberry Pis and disks can be fitted and then attached to a DIN rail.
+
This show will recount my experiences with creating a compact mounting system for one of my Raspberry Pi systems. I had the help of my son and his girlfriend in 3D printing the parts for this project.
+
DIN Rail
+
There are three different designs of DIN rail, but perhaps the commonest is called the top hat. This type comes in two sizes: one has a depth of 7.5mm and the other 15mm.
+
I found the 7.5mm type easily available on eBay and Amazon and bought a set of short lengths.
+
+1. “Top Hat” DIN Rail
+
+2. “Top Hat” DIN Rail, end view
+
Accessories
+
Stand
+
In the first instance I tried using a stand that turned out to be entirely too fragile and unstable. We printed a pair of these, then wondered why we’d done it! The design came from Thingiverse.
+
The design is triangular and can accept DIN rails on both sides, which would make it more rigid. However, the thickness of the stand seems inadequate for multiple devices to be mounted on it.
+
+3. The first stand design was unstable and seemed too weak
+
Later I found what looked like a better design, again from Thingiverse, and used that. This design is for the 15mm top hat rail, which we didn’t fully appreciate when printing it. Once we realised the mistake, the solution was to make a 7.5mm shim, based on the stand geometry, to reduce the space between the back of the rail and the stand.
+
+4. The second stand design was much more robust but needed a shim for the 7.5mm rail
+
Mounting plates
+
I used the same source as the first stand for my mounting plates. I printed a plate for a Raspberry Pi 3B+ and another for an SSD disk.
+
The designs attach to the DIN rail with a hook at the top. The base of each mount is secured by a removable locking tab which is held in place by friction. See image 9 for a view of this.
+
+5. Mounting plate for RPi (1)
+
+6. Mounting plate for RPi (2)
+
+7. Mounting plate for SSD (1)
+
+8. Mounting plate for SSD (2)
+
+9. A locking tab fixes the mounting plate on the rail
+
Final result
+
The rail on its stands is very solid and stable, even with the equipment mounted. There is room for more devices on the rail, though perhaps if I load it up it will become less stable and might need to be fixed down to the shelf it will be installed on.
+
+
Bash Tips - 18 (HPR Show 2729)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Arrays in Bash
+
This is the third of a small group of shows on the subject of arrays in Bash. It is also the eighteenth show in the Bash Tips sub-series.
+
In the last show we looked at ways of accessing elements with negative indices and how to concatenate arrays. We then launched into parameter expansion in the context of arrays.
+
There are a few more parameter expansion operations to look at in this episode; then in the next episode we will look in more depth at the declare builtin command and at some of the commands that assist with loading data into arrays.
+
More parameter expansion operations and arrays
+
String replacement
+
This expansion performs a single replacement within a parameter string or repeats the replacement throughout the entire string. It can also perform the same type of operations on array elements.
+
The syntax is:
+
${parameter/pattern/string}
+
The pattern is a glob or extglob pattern. The parameter is expanded and a search carried out for the longest match with pattern, which is replaced with string.
+
If there is no string the matched pattern is deleted. In that case the expression may be simplified by omitting the trailing '/':
+
${parameter/pattern}
+
The first character of pattern affects the search:
+
+
'/' - all matches of pattern are replaced by string
+
'#' - must match at the beginning of the expanded value of parameter
+
'%' - must match at the end of the expanded value of parameter
+
+
If the pattern needs to match one of these characters then it can be escaped by preceding it with a backslash.
+
Examples using simple variables
+
Using an old phrase, once employed as a typing exercise, we simply replace the word 'men' by 'people':
+
$ phrase='Now is the time for all good men to come to the aid of the party'
+$ echo "${phrase/men/people}"
+Now is the time for all good people to come to the aid of the party
+
Using '/' as the first character of pattern we replace all occurrences of 'the' by 'THE':
+
$ echo "${phrase//the/THE}"
+Now is THE time for all good men to come to THE aid of THE party
+
Using an extglob pattern 'the' and 'to' are replaced by 'X':
+
$ shopt -s extglob
+$ echo "${phrase//@(the|to)/X}"
+Now is X time for all good men X come X X aid of X party
+
Unfortunately it is not possible to vary the replacement string depending on the match. It would be necessary to write something more complex such as a loop to achieve this. See below for an example script which performs such a task.
+
Replace the 'N' followed by two letters at the start of the string by 'XXX':
+
$ echo "${phrase/#N??/XXX}"
+XXX is the time for all good men to come to the aid of the party
+
Matching a pattern which starts with '/', '#', '%' requires escaping the leading character:
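+
For example, using a made-up string:
+
$ music='C# and F# are keys'
+$ echo "${music/\#/ sharp}"
+C sharp and F# are keys
+$ echo "${music/F#/F sharp}"
+C# and F sharp are keys
+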
Note that in the second case, because the '#' is not the first character it does not need to be escaped.
+
Examples using arrays
+
If the parameter is an array expression using '@' or '*' as an index then the substitution operation is applied to each member of the array in turn.
+
Here we declare an indexed array whose elements are the words from the example phrase above. We replace the first letter of each element by 'X':
+
$ declare -a words=( $phrase )
+$ echo "${words[@]/?/X}"
+Xow Xs Xhe Xime Xor Xll Xood Xen Xo Xome Xo Xhe Xid Xf Xhe Xarty
+
Here the last letter of each element is replaced by 'X':
+
$ echo "${words[@]/%?/X}"
+NoX iX thX timX foX alX gooX meX tX comX tX thX aiX oX thX partX
+
If the pattern consists of '#' or '%' on its own it is possible to add text to each element at the start or the end. Here we first add '=> ' to the start of each element, then ' <=' to the end:
+
$ echo "${words[@]/#/=> }"
+=> Now => is => the => time => for => all => good => men => to => come => to => the => aid => of => the => party
+$ echo "${words[@]/%/ <=}"
+Now <= is <= the <= time <= for <= all <= good <= men <= to <= come <= to <= the <= aid <= of <= the <= party <=
+
It is possible for the string part to be a reference to another variable (or even a command substitution):
+
$ echo "${words[@]/#/${words[1]} }"
+is Now is is is the is time is for is all is good is men is to is come is to is the is aid is of is the is party
+
However, the value is determined just once, before the multiple substitutions begin, since the whole expansion is evaluated internally by Bash in a single operation:
+
$ echo "${words[@]/#/$RANDOM }"
+9559 Now 9559 is 9559 the 9559 time 9559 for 9559 all 9559 good 9559 men 9559 to 9559 come 9559 to 9559 the 9559 aid 9559 of 9559 the 9559 party
+
The 'RANDOM' variable was accessed only once and its value used repeatedly.
+
Changing case
+
The final parameter expansion we’ll look at modifies the case of alphabetic characters in parameter. The syntax definitions (from the GNU Bash manual) are:
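+
${parameter^pattern}
+${parameter^^pattern}
+${parameter,pattern}
+${parameter,,pattern}
+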
The first pair change to upper case, and the second pair to lower case. It is important to understand the way this expansion expression is described in the manual:
+
+
Each character in the expanded value of parameter is tested against pattern, and, if it matches the pattern, its case is converted. The pattern should not attempt to match more than one character.
+
+
This means that the pattern matches only one character, not a word or similar.
+
Also, there’s this:
+
+
… the '^' and ',' expansions match and convert only the first character in the expanded value.
+
+
This can catch the unwary - it certainly caught me until I read the description properly!
+
Where the '^' or ',' is doubled the case changing operation is performed on every matching character, otherwise it operates on the first character only.
+
If the pattern is omitted, it is treated like a '?', which matches every character.
+
If the parameter is an array variable with '@' or '*' as a subscript then the case changing operations are carried out on each element.
+
Examples using simple variables
+
Using the phrase from earlier, we can alter the case of every vowel:
+
$ echo "${phrase^^[aeiou]}"
+NOw Is thE tImE fOr All gOOd mEn tO cOmE tO thE AId Of thE pArty
+
Note that the following expression actually does nothing, as should be apparent from the second extract from the manual above:
+
$ echo "${phrase^[aeiou]}"
+Now is the time for all good men to come to the aid of the party
+
Contrary to what you might expect, the first vowel is not converted. Instead the pattern is compared with the first letter 'N' which it doesn’t match - so nothing is done.
+
Examples using arrays
+
If we work with the array built earlier and use the vowel pattern:
+
$ echo "${words[@]^[aeiou]}"
+Now Is the time for All good men to come to the Aid Of the party
+
Now, the pattern has matched any array element that starts with a vowel and has made that leading vowel upper case.
+
The next example operates on all vowels in each element to give the same result as earlier when working with the variable called 'phrase':
+
$ echo "${words[@]^^[aeiou]}"
+NOw Is thE tImE fOr All gOOd mEn tO cOmE tO thE AId Of thE pArty
+
This one operates on all non-vowels (consonants):
+
$ echo "${words[@]^^[^aeiou]}"
+NoW iS THe TiMe FoR aLL GooD MeN To CoMe To THe aiD oF THe PaRTY
+
Don’t be tempted to try something like the following:
+
$ echo "${words[@]^^@(good|men)}"
+Now is the time for all good men to come to the aid of the party
+
Remember the description in the manual: The pattern should not attempt to match more than one character.
+
Examples
+
Example 1
+
I have included a downloadable script bash18_ex1.sh which shows a way of transforming individual words in text the "hard way":
+
#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 1 for Bash Tips show 18: transforming words in a string
+#
+# This is a contrived and overly complex example to show that replacing
+# selected words in a phrase by different words is a non-trivial exercise!
+#-------------------------------------------------------------------------------
+
+#
+# Enable extglob
+#
+shopt -s extglob
+
+#
+# What we'll work on and where we'll store the transformed version
+#
+phrase='Now is the time for all good men to come to the aid of the party'
+newphrase=
+
+#
+# How to transform words in the phrase and a place to store the keys
+#
+declare -A transform=([good]='bad' [men]='people' [party]='Community')
+declare -a keys
+
+#
+# Build an extglob pattern from the keys of the associative array
+#
+keys=( ${!transform[@]} )
+targets="${keys[*]/%/|}" # Each key is followed by a '|'
+targets="${targets%|}" # Strip the last '|'
+targets="${targets// }" # Some spaces got in there too
+pattern="@(${targets})" # Make the pattern at last
+
+#
+# Go word by word; if a word matches the pattern replace it by what's in the
+# 'transform' array. Because the pattern has been built from the array keys
+# we'll never have the case where a word doesn't have a transformation.
+#
+for word in $phrase; do
+ if [[ $word == $pattern ]]; then
+ word=${transform[$word]}
+ fi
+ newphrase+="$word "
+done
+echo "$newphrase"
+
+exit
+
I have put a lot of comments into the script, but since it uses many of the actions that we have looked at in this series I hope you’ll be able to understand it with no trouble.
+
The principle is that just by adding the words to be matched and their replacements into the array 'transform' will cause the transformation to happen, because (in this case) the script has generated an extglob pattern from the keys.
+
Running the script generates the following output:
+
Now is the time for all bad people to come to the aid of the Community
+
+
Example 2
+
There are other ways of doing this of course; for example the loop could simply check whether the current word is a key in the 'transform' array, replacing the word if so and leaving it alone if not. However, there is no obviously-named "does key X exist in this associative array?" feature in Bash, so the solution is not immediately apparent.
+
I have included a second downloadable script bash18_ex2.sh which shows a simpler way of transforming individual words in text. This one uses the '-v varname' operator which we looked at in show 2659:
+
#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 2 for Bash Tips show 18: transforming words in a string, second
+# simpler method that depends on the '-v' operator
+#-------------------------------------------------------------------------------
+
+#
+# What we'll work on and where we'll store the transformed version
+#
+phrase='Now is the time for all good men to come to the aid of the party'
+newphrase=
+
+#
+# How to transform words in the phrase
+#
+declare -A transform=([good]='bad' [men]='people' [aid]='assistance'
+ [party]='Community')
+
+#
+# Go word by word; if an element with the word as a key exists in the
+# 'transform' array replace the word by what's there.
+#
+for word in $phrase; do
+ if [[ -v transform[$word] ]]; then
+ word=${transform[$word]}
+ fi
+ newphrase+="$word "
+done
+echo "$newphrase"
+
+exit
+
The '-v varname' operator returns true if the shell variable varname is set (has been assigned a value). Note that the script used '-v transform[$word]' - the name of the array with a subscript.
+
Running the script (in which the 'transform' array is very slightly different) generates the following output:
+
Now is the time for all bad people to come to the assistance of the Community
+
When I see a Bash script these days I usually find myself looking for ways to rewrite it to make it fit in with what I have been learning while doing my Bash Tips sub-series. Either that or I find it’s got some better ideas than I’ve been using which I have to find out about.
+
I also spend time going over my own old scripts (I was writing them in the 1990’s in some cases) and trying to incorporate newer Bash features.
+
Suffice it to say that I spotted some areas for improvement in Ken’s script and thought this might be the way to share my thoughts about them. We’re low on shows as I write this, so that gave me more motivation to make a show rather than add a comment or send Ken an email.
+
Apology: I’m still suffering from the aftermath of some flu-like illness so have had to edit coughing fits out of the audio at various points. If you detect any remnants then I’m sorry!
+
General issues
+
There are a few uses of while loops in the script that I would be inclined to rewrite, but as it stands the script does what is wanted without these changes.
+
I loaded the script into Vim where I have the ShellCheck tool configured to examine Bash scripts. It found many issues, some fairly trivial, but a few were a bit more serious.
+
Click on the thumbnail to get an idea of what ShellCheck reported.
+
+
Here are some of the reported issues:
+
+
[34] ShellCheck doesn’t like "${logfile}".$(/bin/date +%Y%m%d%H%M%S) but would be happy with "${logfile}.$(/bin/date +%Y%m%d%H%M%S)". It’s not clever enough to know that the date call will not generate spaces.
+
[39,48] Using read without the -r option means that any backslash characters in the input will not be handled properly so ShellCheck reports this.
+
[39,48] Personally, as alluded to above, I now try to avoid loops of the form:
+
command_generating_data | while read -r var; do
+ do_things
+ done
+
That is because the loop body is run in a sub-process which cannot communicate back to the main script.
+
I would much rather write it as:
+
while read -r var; do
+ do_things
+ done < <(command_generating_data)
+
because the loop body can affect variables in the main script if needed.
+
+
In Ken’s script these are not serious issues, since they are only ShellCheck warnings and no loop needs to pass variables back to the main script. However, as far as the loops are concerned, it would be all too easy to enhance them in the future to pass back values, forgetting that this will not work. I have made this mistake myself on more than one occasion!
+
Specifics
+
I want to highlight a few instances which I would be strongly inclined to rewrite. I have referred to them by line within the youtube-rss.bash downloadable from show 2720.
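+
The first is an if command of roughly this shape (a paraphrase rather than Ken’s exact line):
+
if [ $(grep "${thisvideo}" "${logfile}" | wc -l) -eq 0 ]
+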
This if command uses grep to search $logfile for $thisvideo and wants to know if there is (at least) one match, so the output is passed to wc to count the lines and the resulting number from the command substitution is checked to see if it is zero – i.e. there was no match.
+
However, since grep returns a true/false answer for cases such as this, the following would do the same but simpler:
+
if ! grep -q "${thisvideo}" "${logfile}"
+
This time grep performs the same test but its result is reversed by the '!'. The -q option tells it not to produce any output.
In the script shown in the notes for 2720 there’s a skipcrap variable, but this has been commented out, so ShellCheck objects to this of course. I re-enabled it for testing. In general though this if command is unnecessarily convoluted. It can be rewritten thus:
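+
if [[ -n "${skipcrap}" ]] && echo "${title}" | grep -E -q -i "${skipcrap}"
+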
Rather than checking whether the length of the string skipcrap is zero and then negating the result, it is better to use the non-zero test '-n' from the start.
+
Since this -n test needs to be in [] or [[]] but the rest does not we organise things that way.
+
The second half of the test after && can be a pipeline since the result of the final part is what is considered in the test. In this case non-standard egrep is replaced by grep -E and the test is performed quietly with only the true/false result being of importance.
+
+
A further change might be to do away with echo "${title}" | grep -E -q -i "${skipcrap}" and use a Bash regular expression. Here grep is being used as a regular expression tool, comparing title with skipcrap. The latter is a regular expression itself (since that’s what grep -E needs), so everything is ready to go:
+
if [[ -n "${skipcrap}" && $title =~ $skipcrap ]]
+
This passes the ShellCheck test and when I run it from the command line it seems to work:
+
$ skipcrap="fail |react |live |Best Pets|BLOOPERS|Kids Try"
+$ title='The Best Pets in the World'
+$ if [[ -n "${skipcrap}" && $title =~ $skipcrap ]]; then echo "Crap"; else echo "Non-crap"; fi
+Crap
+
The downside is that the test is case-sensitive whereas the grep version used -i which made it case-insensitive. This can be overcome – in a slightly less elegant way perhaps – by forcing everything to lowercase for the test:
+
if [[ -n "${skipcrap}" && ${title,,} =~ ${skipcrap,,} ]]
Bash is powerful but does some things in an obscure way.
+
ShellCheck is a very helpful tool, catching scripting errors, but it can also be something of an irritant when it nags about minor things. I am preparing another HPR episode about its use with Vim as well as the possible use of other checkers.
+
Apologies to Ken if it seemed as if I was making excessive criticisms of his scripts. What I intended was constructive criticism of course.
+
+
Bash Tips - 19 (HPR Show 2739)
+
Arrays in Bash (part 4)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Arrays in Bash
+
This is the fourth and last of a small group of shows on the subject of arrays in Bash. It is also the nineteenth show in the Bash Tips sub-series.
+
In the last show we continued with the subject of parameter expansion in the context of arrays. There are other aspects of this that could be looked at, but we’ll leave it for the moment and may revisit it in the future.
+
In this episode we will look in more depth at the declare (typeset) builtin command and at some related commands (readonly and local). We will also look at the commands that assist with loading data into arrays: mapfile (readarray) and read.
+
The declare (typeset) command in more detail
+
The 'declare' command is a Bash builtin used to declare variables and give them attributes. This includes arrays, as we have seen.
+
The command 'typeset' is a synonym for 'declare' supplied for compatibility with the Korn shell.
+
We will look at some of the options to 'declare' but will restrict ourselves largely to those relevant to arrays. All of the options (with the exception of '-r' and '-a') are turned on by starting with a '-' and turned off with '+' (slightly confusingly).
+
The '-i' option
+
This makes the variable behave as an integer. Arithmetic evaluation is performed when the variable is assigned a value (as if the assignment is inside '(())').
+
I have included a downloadable script bash19_ex1.sh which demonstrates what an array declared in this way can do:
+
#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 1 for Bash Tips show 19: how an integer array works
+#-------------------------------------------------------------------------------
+
+#
+# Declare an integer array and a normal one
+#
+declare -a -i ints
+declare -a norm
+
+#
+# Load both with arithmetic expressions
+#
+ints=('38 % 7' '38 / 7')
+norm=('38 % 7' '38 / 7')
+
+#
+# Try storing a string in each of the arrays
+#
+ints+=('jellyfish')
+norm+=('jellyfish')
+
+#
+# Show the results
+#
+echo "ints: ${ints[*]}"
+echo "norm: ${norm[*]}"
+
+
+exit
+
Running the script generates the following output:
+
ints: 3 5 0
+norm: 38 % 7 38 / 7 jellyfish
+
+
The integer array stored the results of the expressions, but treated the string as zero. The other array stored the same expressions as strings.
+
The '-l' and '-u' options
+
These options make the variables force a lower case ('-l') or upper case ('-u') conversion on whatever is assigned to them. Only one can be set at a time of course!
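+
A quick sketch, using a made-up variable 'shout':
+
$ declare -u shout
+$ shout='make some noise'
+$ echo "$shout"
+MAKE SOME NOISE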
+
The '-r' option
+
This option makes the names being declared readonly. This means that they must be initialised at creation time, they cannot then be assigned further values, and the attribute cannot be turned off. This is a way of creating constants in Bash:
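+
$ declare -r LIMIT=42    # 'LIMIT' is just an example name
+$ LIMIT=43
+bash: LIMIT: readonly variable
+
The readonly command
+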
This command is equivalent to 'declare -r' discussed above. It takes the options '-a', '-A', '-f' and '-p' only, and they have the same meaning as they do in 'declare' (we haven’t looked at '-f' yet though). The command comes from the Bourne shell.
+
The local command
+
The 'local' command takes the same options and types of arguments as 'declare' but is only for use inside functions where it creates variables local to the function (invisible outside it). We will look at it in more detail in a later episode when we deal with functions in Bash.
+
The mapfile (readarray) command
+
This command reads lines from standard input into an indexed array. It can also read from a file descriptor, a subject we have not looked at yet.
+
The 'readarray' command is a synonym for 'mapfile'.
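+
The syntax is (from the GNU Bash manual):
+
mapfile [-d delim] [-n count] [-O origin] [-s count] [-t] [-u fd] [-C callback] [-c quantum] [array]
+
The options are as follows:
+
+
-ddelim
+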
The first character of delim is used to terminate each input line, rather than newline.
+
+
+
-ncount
+
Read a maximum of count lines. If count is zero, all available lines are copied.
+
+
+
-Oorigin
+
Begin writing lines to array at index number origin. The default value is zero.
+
+
+
-scount
+
Discard the first count lines before writing to array.
+
+
+
-t
+
Remove a trailing delim (default newline) from each line read.
+
+
+
-ufd
+
Read lines from file descriptor fd rather than standard input.
+
+
+
-Ccallback
+
Execute/evaluate a function/expression, callback, every time quantum lines are read. The -c option specifies quantum.
+
+
+
-cquantum
+
Specify the number of lines, quantum, after which function/expression callback should be executed/evaluated if specified with -C. Default is 5000.
+
+
+
array
+
The name of the array variable where lines should be written. If the array argument is omitted the data is loaded into an array called 'MAPFILE'.
+
+
+
+
When callback is evaluated, it is supplied the index of the next array element to be assigned and the line to be assigned to that element as additional arguments. The callback function or expression is evaluated after the line is read but before the array element is assigned.
+
If not supplied with an explicit origin, 'mapfile' will clear array before assigning to it.
+
I have included a downloadable script bash19_ex2.sh which demonstrates some uses of 'mapfile':
+
#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 2 for Bash Tips show 19: the mapfile/readarray command
+#-------------------------------------------------------------------------------
+
+#
+# Declare an indexed array
+#
+declare -a map
+
+#
+# Fill the array with a process substitution that generates 10 random numbers,
+# each followed by a newline (the default delimiter). We remove the newline
+# characters.
+#
+mapfile -t map < <(for i in {1..10}; do echo $RANDOM; done)
+
+#
+# Show the array as a list
+#
+echo "map: ${map[*]}"
+echo
+
+#
+# Declare a new indexed array
+#
+declare -a daffs
+
+#
+# Define a string with spaces replaced by underscores
+#
+words="I_wandered_lonely_as_a_Cloud_That_floats_on_high_o'er_vales_and_Hills,_When_all_at_once_I_saw_a_crowd,_A_host,_of_golden_Daffodils"
+
+#
+# Fill the array with a process substitution that provides the string. The
+# delimiter is '_' and we remove it as we load the array
+#
+mapfile -d _ -t daffs < <(echo -n "$words")
+
+#
+# Show the array as a list
+#
+echo "daffs: ${daffs[*]}"
+echo
+
+#
+# Fill an array with 100 random dictionary words. Use 'printf' as the callback
+# to report every 10th word using -C and -c
+#
+declare -a big
+mapfile -t -C "printf '%02d %s\n' " -c 10 big < <(grep -E -v "'s$" /usr/share/dict/words | shuf -n 100)
+echo
+
+#
+# Report every 10th element of the populated array in the same way
+#
+echo "big: ${#big[*]} elements"
+for ((i = 9; i < ${#big[*]}; i=i+10)); do
+ printf '%02d %s\n' "$i" "${big[$i]}"
+done
+
+exit
+
Running the script generates the following output:
+
map: 9405 5502 13323 16242 31013 5921 10529 28866 32759 24391
+
+daffs: I wandered lonely as a Cloud That floats on high o'er vales and Hills, When all at once I saw a crowd, A host, of golden Daffodils
+
+09 Malaysians
+19 kissers
+29 wastewater
+39 diddles
+49 Brahmagupta
+59 dissociated
+69 healthy
+79 amortize
+89 unsure
+99 interbreeding
+
+big: 100 elements
+09 Malaysians
+19 kissers
+29 wastewater
+39 diddles
+49 Brahmagupta
+59 dissociated
+69 healthy
+79 amortize
+89 unsure
+99 interbreeding
+
+
Using the read command to fill an array
+
This command, which we have seen on many occasions before (but haven’t yet examined in detail), takes an option '-a name' which loads an indexed array with words.
+
Because 'read' reads one line at a time the words to be placed in the array must all be on one line. The line is split into words using the word splitting process described in episode 2045.
+
I have included a downloadable script bash19_ex3.sh which demonstrates a use of 'read' with the option '-a name':
+
#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 3 for Bash Tips show 19: using 'read' to fill an array
+#-------------------------------------------------------------------------------
+
+#
+# Create an indexed array
+#
+declare -a readtest
+
+#
+# Populate it with space separated words on one line using 'echo -n' to force
+# that to happen
+#
+read -r -a readtest < <(for c in {A..J}{1..3}; do echo -n "$c "; done)
+
+#
+# The result
+#
+echo "readtest: ${#readtest[*]} elements"
+echo "readtest: ${readtest[*]}"
+
+exit
+
Running the script generates the following output:
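+
readtest: 30 elements
+readtest: A1 A2 A3 B1 B2 B3 C1 C2 C3 D1 D2 D3 E1 E2 E3 F1 F2 F3 G1 G2 G3 H1 H2 H3 I1 I2 I3 J1 J2 J3
+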
Note: As I was recording the audio for this episode I suddenly realised that the way the data is being generated in this script is unnecessarily complex. I used:
+
read -r -a readtest < <(for c in {A..J}{1..3}; do echo -n "$c "; done)
+
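Presumably a single echo of the brace expansion would have done the same job, something like:
+
read -r -a readtest < <(echo {A..J}{1..3})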
+
+
Battling with English - part 3 (HPR Show 2751)
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Some word confusions
+
In this episode, the third of this series, I’m looking at some words that are sometimes used in the wrong places, often being confused with one another. These words can be particularly difficult to differentiate for people for whom English is not their first language.
+
Confusing been and being
+
These two words often sound similar, though, as you can see, they are spelt differently. They are often confused, particularly by people learning English.
example 1: “Hacker Public Radio came into being at the end of 2007.”
+
+
example 2: “‘The Unbearable Lightness of Being’ is a 1984 novel by Milan Kundera”1
+
+
example 3: “Some of my readers may have an interest in being informed whether or no any portions of the Marshalsea Prison are yet standing.”2
+
+
+
+
meaning 3: the nature or essence of a person
+
+
example 1: “My father was the business brains behind it and this affected every fibre of his being.”
+
+
example 2: “I oppose the reinstatement of the death penalty with every fibre of my being.”
+
+
+
+
meaning 4: (noun) a real or imaginary living creature or entity, especially an intelligent one
+
+
example 1: “It is also a matter of how all living beings, not just human beings, live side by side.”
+
+
example 2: “The motif of alien beings peopling our planet is a very common one in science fiction.”
+
+
+
+
+
Example of what you should never write
+
+
It ended up been a waste of money. ✖
+
+
The correct word to use rather than “been” is “being”.
+
+
It ended up being a waste of money. ✔
+
+
The meaning here is that “it was a waste of money” or “looking back it has been a waste of money” (note the use of “has been”). The form of “being” here would be the present participle of the verb to be (meaning 1 above), whereas “been” is the past participle.
+
+
Confusing weather, wether, whether, wither and whither
+
The words weather, wether, and whether sound the same though their spellings are different, but mean very different things. The similar words wither and whither can also be confused with each other and with the previous group but mean different things.
meaning 1: (interrogative adverb) to what place or state (literary, archaic)
+
+
example 1: “Whither are we bound?’”
+
+
+
+
meaning 2: (relative adverb) to which - with reference to a place (literary, archaic)
+
+
example 1: “One finds oneself walking mechanically to the tower of Belvedere Castle whither all other park visitors have gravitated like the ghouls in ‘Night of the Living Dead’”
+
+
+
+
+
Examples of what you should never write
+
Example 1
+
+
Lovely wither we’re having! ✖
+
+
Somebody who enjoys shrivelling? Should have been:
+
+
Lovely weather we’re having! ✔
+
+
Example 2
+
+
DuckDuckGo discussed and wether it personalizes searches ✖
+
+
From the notes for HPR show 1416. The corrected version would read:
+
+
DuckDuckGo discussed and whether it personalizes searches ✔
+
+
Example 3
+
+
…you don’t have to worry about whither you check your feeds on a desktop PC or on your phone. ✖
+
+
From the notes for an HPR show; the (archaic) whither should have been whether.
From the preface to the 1857 edition of “Little Dorrit” by Charles Dickens↩
+
+
+
+
+
+
+
The downloadable script hpr2756_bash20_ex2.sh, a simple use of positional parameters, belongs with the Bash Tips 20 (HPR Show 2756) notes below:
+
+#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 2 for Bash Tips show 20: simple use of positional parameters
+#-------------------------------------------------------------------------------
+
+#
+# This script needs 2 arguments
+#
+if [[ $# -ne 2 ]]; then
+ echo "Usage: $0 word count"
+ exit 1
+fi
+
+word=$1
+count=$2
+
+#
+# Repeat the 'word' 'count' times on the same line
+#
+for (( i = 1; i <= count; i++ )); do
+ echo -n "$word"
+done
+echo
+
+exit
Bash Tips - 20 (HPR Show 2756)
+
Deleting arrays; positional and special parameters in Bash
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Tidying loose ends (Some collateral Bash tips)
+
Deleting arrays
+
I forgot to cover one thing on my list when doing the last show: I forgot to explain how to delete arrays and array elements. I’ll cover that topic in this episode.
+
Positional and Special parameters
+
I have also avoided talking much about the positional and special parameters in Bash: '$1', '$2', '$#' and the rest. I will cover (some of) these in this episode.
+
Silly titles
+
I stopped doing the weird episode titles by episode 14 because I thought the joke was getting tired. However, I think a few people missed them (and a certain HPR colleague was found vandalising my new titles as they were being posted ;-), so I have added them inside the notes on the older shows and am adding one here – as a homage to silliness.
+
The unset command
+
This is a built-in command that originated from the Bourne shell. It removes variables, arrays, parts of arrays or functions.
+
The command syntax is (from the GNU Bash manual):
+
unset [-fnv] [name]
+
The unset command removes each variable or function represented by name. This is just the name of the thing to be deleted and does not take a dollar sign ('$'). If the variable or function does not exist then this does not cause an error.
+
The '-v' option
+
If this option is given each name refers to a shell variable, which is removed.
+
$ fruit='rambutan'
+$ echo "fruit is $fruit"
+fruit is rambutan
+$ unset -v fruit
+$ echo "fruit is $fruit"
+fruit is
+
A variable unset in this way will have been completely removed. This is not the same as setting the variable to null:
+
$ fruit='mangosteen'
+$ if [[ -v fruit ]]; then echo "Exists"; else echo "Doesn't exist"; fi
+Exists
+$ fruit=
+$ if [[ -v fruit ]]; then echo "Exists"; else echo "Doesn't exist"; fi
+Exists
+$ unset -v fruit
+$ if [[ -v fruit ]]; then echo "Exists"; else echo "Doesn't exist"; fi
+Doesn't exist
+
Remember that the Bash conditional expression '-v varname' returns true if the shell variable varname is set (has been assigned a value). Being null simply means that the variable has a null value, but it still exists.
+
The '-f' option
+
If this option is given then each name refers to a shell function, which is removed. Although there’s not much more to say, we’ll look at this in a little more detail when we cover functions in a formal way in a later episode.
+
Note that if no option is given to 'unset' each name is first checked to see if it is a variable and, if it is, it is removed. If not, and name is a function, then the function is removed. This could be unfortunate if you have variables and functions with similar names and you mistype a variable name.
+
The POSIX definition states that functions can only be removed if the '-f' option is given.
+
The '-n' option
+
This option is for removing variables with the nameref option set. We will look at such variables in a later show and will go into more detail about unsetting them then.
+
Variables marked as readonly
+
These cannot be unset. We touched on this in episode 19 with the 'declare -r' and 'readonly' commands.
+
Using a dollar sign in front of the variable name
+
The issue of whether the dollar sign is used or not is important. Consider the following:
+
$ a='b'
+$ b='Contents of variable b'
+$ echo "a=$a b=$b"
+a=b b=Contents of variable b
+$ unset $a # <--- Don't do this!
+$ echo "a=$a b=$b"
+a=b b=
+
Here the variable 'b' has been removed where (presumably) the intention was to remove variable 'a'!
+
Arrays and array elements
+
Entire arrays can be removed with one of the following:
+
unset array
+unset array[*]
+unset array[@]
+
Note again that the array name is not preceded by a dollar sign ('$').
+
Individual elements may be removed as follows:
+
unset array[subscript]
+
As expected, the subscript must be numeric (or an expression returning a number) for indexed arrays. For associative arrays the subscript is a string (or an expression returning a string). Care is needed to quote appropriately if the subscript string contains spaces.
+
An index for an indexed array can be negative, as discussed in earlier shows, in which case the element in question is relative to the end of the array.
+
Note that ShellCheck, the script checking tool, advises that when subscripted arrays are used with unset they be quoted to avoid problems with glob expansion. The examples in this episode do this.
+
I have included a downloadable script bash20_ex1.sh which demonstrates array element deletion for both types of array:
+
#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 1 for Bash Tips show 20: deleting individual array elements
+#-------------------------------------------------------------------------------
+
+#
+# Seed the random number generator with a nanosecond number
+#
+RANDOM=$(date +%N)
+
+echo "Indexed array"
+echo "-------------"
+
+#
+# Create indexed array and populate with ad ae ... cf
+#
+declare -a iarr
+mapfile -t iarr < <(printf '%s\n' {a..c}{d..f})
+
+#
+# Report element count and show the structure
+#
+echo "Length: ${#iarr[*]}"
+declare -p iarr
+
+#
+# Unset a random element
+#
+ind=$((RANDOM % ${#iarr[*]}))
+echo "Element $ind to be removed, contents: ${iarr[$ind]}"
+unset "iarr[$ind]"
+
+#
+# Report on the result of the element removal
+#
+echo "Length: ${#iarr[*]}"
+declare -p iarr
+
+echo
+echo "Associative array"
+echo "-----------------"
+
+#
+# Create associative array. Populate with the indices from the indexed array
+# using the array contents as the subscripts.
+#
+declare -A aarr
+for (( i = 0; i <= ${#iarr[*]}; i++ )); do
+ # If there's a "hole" in iarr don't create an element
+ [[ -v iarr[$i] ]] && aarr[${iarr[$i]}]=$i
+done
+
+#
+# Report element count and keys
+#
+echo "Length: ${#aarr[*]}"
+echo "Keys: ${!aarr[*]}"
+
+#
+# Use a loop to report array contents in sorted order
+#
+for key in $(echo "${iarr[@]}" | sort); do
+ echo "aarr[$key]=${aarr[$key]}"
+done
+
+#
+# Make another contiguous indexed array of the associative array's keys. We
+# don't care about their order
+#
+declare -a keys
+mapfile -t keys < <(printf '%s\n' ${!aarr[*]})
+
+#
+# Unset a random element. The indexed array 'keys' contains the keys
+# of the associative array so we use the selected one as a subscript. We use
+# this array because it doesn't have any "holes". If we'd used 'iarr' we might
+# have hit the "hole" we created earlier!
+#
+k=$((RANDOM % ${#keys[*]}))
+echo "Element '${keys[$k]}' to be removed, contents: ${aarr[${keys[$k]}]}"
+unset "aarr[${keys[$k]}]"
+
+#
+# Report final element count and keys
+#
+echo "Length: ${#aarr[*]}"
+echo "Keys: ${!aarr[*]}"
+declare -p aarr
+
+exit
+
Running the script generates the following output:
+
Indexed array
+-------------
+Length: 9
+declare -a iarr=([0]="ad" [1]="ae" [2]="af" [3]="bd" [4]="be" [5]="bf" [6]="cd" [7]="ce" [8]="cf")
+Element 5 to be removed, contents: bf
+Length: 8
+declare -a iarr=([0]="ad" [1]="ae" [2]="af" [3]="bd" [4]="be" [6]="cd" [7]="ce" [8]="cf")
+
+Associative array
+-----------------
+Length: 8
+Keys: be bd af ad ae cd ce cf
+aarr[ad]=0
+aarr[ae]=1
+aarr[af]=2
+aarr[bd]=3
+aarr[be]=4
+aarr[cd]=6
+aarr[ce]=7
+aarr[cf]=8
+Element 'ae' to be removed, contents: 1
+Length: 7
+Keys: be bd af ad cd ce cf
+declare -A aarr=([be]="4" [bd]="3" [af]="2" [ad]="0" [cd]="6" [ce]="7" [cf]="8" )
+
+
I have included a lot of comments in the script to explain what it is doing.
+
Things to note are:
+
+
The indexed array 'iarr' is filled with character pairs in order, and the indices generated are a contiguous sequence. Once an element is unset a “hole” is created in the sequence.
+
The associative array 'aarr' has a random element deleted. The element is selected by using the indexed array 'keys' which is created from the keys themselves. We use this rather than 'iarr' so that the random number can’t match the “hole” we created earlier.
+
+
+
Positional Parameters
+
The positional parameters are “the shell’s command-line arguments” - to quote the GNU Bash Manual.
+
We have seen these parameters in various contexts in other shows in the Bash Scripting series. They are denoted by numbers 1 onwards, as in '$1'. It is possible to set these parameters at the time the shell is invoked, but it is more common to see them being used in scripts. When a shell script is invoked any positional parameters are temporarily replaced by the arguments to the script, and the same occurs when executing a shell function.
+
The positional parameters are referred to as '$N' or '${N}' where N is a number starting from 1. The '${N}' form must be used if N is 10 or above. These numbers denote the position of the argument of course. The positional parameters cannot be set in assignment statements, so '1=42' is illegal.
+
The set command
+
This command (which is hugely complex!) allows the positional parameters to be cleared or redefined (amongst many other capabilities).
+
set -- [argument...]
+
This clears the positional parameters and, if there are arguments provided, places them into the parameters.
+
set - [argument...]
+
If there are arguments then they replace the positional parameters. If no arguments are given then the positional parameters remain unchanged.
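A small interactive sketch of the difference between the two forms:

$ set -- alpha beta
$ echo "$# : $@"
2 : alpha beta
$ set -            # no arguments: parameters are left alone
$ echo "$# : $@"
2 : alpha beta
$ set --           # no arguments: parameters are cleared
$ echo "$# : $@"
0 :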
+
The shift command
+
This is a Bourne shell builtin command. Its job is to shift the positional parameters to the left.
+
shift [n]
+
If n is omitted it is assumed to be 1. The positional parameters from n+1 onwards are renamed to $1 onwards. So, if the positional parameters are:
+
Haggis Neeps Tatties
+
The command 'shift 2' will result in them being:
+
Tatties
+
It is an error to shift more places than there are parameters, and n cannot be negative. The 'shift' command returns a non-zero status if there is an error, but the positional parameters are not changed.
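A very common idiom is to consume the arguments one at a time with 'shift', as in this sketch:

#!/bin/bash
# Process each argument in turn, discarding it once handled
while (( $# > 0 )); do
    echo "Processing: $1"
    shift
done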
+
Special Parameters
+
A number of these are related to the positional parameters, and we will concentrate on these at the moment. The GNU Bash manual contains more detail than is shown here. Look at the manual for the full information if needed.
+
Parameter  Explanation
---------  -----------
$*         Expands to the positional parameters, starting from one. When the
           expansion is not within double quotes, each positional parameter
           expands to a separate word. When in double quotes a single string
           is formed containing the positional parameters separated by the
           first character of IFS. This is similar to the expression
           "${array[*]}" which we have seen before.

$@         Expands to the positional parameters, starting from one. If the
           expression is within double quotes the parameters form separate
           quoted words. This is similar to the expression "${array[@]}"
           which we have seen before.

$#         Expands to the number of positional parameters in decimal.

$0         Expands to the name of the shell or shell script.
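The practical difference between quoted '$*' and '$@' can be seen with a sketch like this (the function is my own; a function's arguments become its positional parameters while it runs):

#!/bin/bash
# "$*" joins the parameters with the first character of IFS;
# "$@" keeps each parameter as a separate quoted word
demo () {
    local IFS=','
    printf 'star: <%s>\n' "$*"
    printf 'at:   <%s>\n' "$@"
}
demo Hacker Public Radio

# Output:
# star: <Hacker,Public,Radio>
# at:   <Hacker>
# at:   <Public>
# at:   <Radio>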
+
Examples
+
Creating a Bash shell with arguments
+
The following snippets show how Bash can be invoked with arguments and how they can be manipulated:
+
$ /bin/bash -s Hacker Public Radio
+$ echo $0
+/bin/bash
+$ echo $#
+3
+$ echo $@
+Hacker Public Radio
+
Here /bin/bash is invoked with the '-s' option, which causes it to run interactively while still accepting arguments. Inside the shell, '$0' contains the file used to invoke Bash. There are three positional parameters, and these are displayed.
+
$ set - $@ is cool
+$ echo $#
+5
+$ echo $@
+Hacker Public Radio is cool
+
The 'set' command is used to replace the positional parameters with themselves ('$@') plus two more words. This makes the count 5, and the new parameters are then displayed.
+
$ shift 2
+$ echo $@
+Radio is cool
+
This shows the use of the 'shift' command to move the parameters two places to the left, thereby removing the first two of them.
A very simple script in /tmp/test is shown which displays the count of its arguments and the arguments themselves. It is invoked with three arguments which temporarily become the positional parameters of the script. The shell’s positional parameters are still intact afterwards though.
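The listing of the script was lost from these notes; a minimal version consistent with the description would be the following (the three arguments shown are illustrative):

$ cat /tmp/test
#!/bin/bash
echo "Count: $#"
echo "Args: $@"
$ /tmp/test A B C
Count: 3
Args: A B C
$ echo $@
Radio is cool
$ exit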
+
The 'exit' command terminates the shell which was created at the start of these snippets.
+
Downloadable script
+
Here is a very simple downloadable script bash20_ex2.sh which demonstrates the use of arguments:
+
#!/bin/bash
+
+#-------------------------------------------------------------------------------
+# Example 2 for Bash Tips show 20: simple use of positional parameters
+#-------------------------------------------------------------------------------
+
+#
+# This script needs 2 arguments
+#
+if [[ $# -ne 2 ]]; then
+ echo "Usage: $0 word count"
+ exit 1
+fi
+
+word=$1
+count=$2
+
+#
+# Repeat the 'word' 'count' times on the same line
+#
+for (( i = 1; i <= count; i++ )); do
+ echo -n "$word"
+done
+echo
+
+exit
+
Running the script as './bash20_ex2.sh goodbye 3' generates the following output:
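goodbyegoodbyegoodbye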
+
+
diff --git a/eps/hpr2816/hpr2816_awk14_ex1.awk b/eps/hpr2816/hpr2816_awk14_ex1.awk
new file mode 100755
index 0000000..088129a
--- /dev/null
+++ b/eps/hpr2816/hpr2816_awk14_ex1.awk
@@ -0,0 +1,10 @@
+#!/usr/bin/awk -f
+
+# Downloadable example 1 for GNU Awk Part 14
+
+NR > 1 {
+ colour = $2
+ fname = "awk14_" colour "_fruit"
+ printf "Writing %s to %s\n",$1,fname
+ print $1 > fname
+}
diff --git a/eps/hpr2816/hpr2816_awk14_ex2.awk b/eps/hpr2816/hpr2816_awk14_ex2.awk
new file mode 100755
index 0000000..25a2fee
--- /dev/null
+++ b/eps/hpr2816/hpr2816_awk14_ex2.awk
@@ -0,0 +1,15 @@
+#!/usr/bin/awk -f
+
+# Downloadable example 2 for GNU Awk Part 14
+
+BEGIN {
+ cmd = "sort -u | nl"
+}
+
+NR > 1 {
+ print $1 | cmd
+}
+
+END {
+ close(cmd)
+}
diff --git a/eps/hpr2816/hpr2816_awk14_ex3.awk b/eps/hpr2816/hpr2816_awk14_ex3.awk
new file mode 100755
index 0000000..b71b014
--- /dev/null
+++ b/eps/hpr2816/hpr2816_awk14_ex3.awk
@@ -0,0 +1,23 @@
+#!/usr/bin/awk -f
+
+# Downloadable example 3 for GNU Awk Part 14
+
+{
+ # Split the path up into components
+ n = split($0,a,"/")
+ if (n < 2) {
+ print "Error in path",$0 > "/dev/stderr"
+ next
+ }
+
+ # Build the shell command so we can show it
+ cmd = sprintf("[ -e %s ] && ln -s -f %s %s",$0,$0,a[n])
+ print ">> " cmd
+
+ # Feed the command to the shell
+ printf("%s\n",cmd) | "sh"
+}
+
+END {
+ close("sh")
+}
diff --git a/eps/hpr2816/hpr2816_awk14_fruit_data.txt b/eps/hpr2816/hpr2816_awk14_fruit_data.txt
new file mode 100755
index 0000000..5f719d6
--- /dev/null
+++ b/eps/hpr2816/hpr2816_awk14_fruit_data.txt
@@ -0,0 +1,10 @@
+name color amount
+apple red 4
+banana yellow 6
+strawberry red 3
+grape purple 10
+apple green 8
+plum purple 2
+kiwi brown 4
+potato brown 9
+pineapple yellow 5
diff --git a/eps/hpr2816/hpr2816_full_shownotes.epub b/eps/hpr2816/hpr2816_full_shownotes.epub
new file mode 100755
index 0000000..70dbc50
Binary files /dev/null and b/eps/hpr2816/hpr2816_full_shownotes.epub differ
diff --git a/eps/hpr2816/hpr2816_full_shownotes.html b/eps/hpr2816/hpr2816_full_shownotes.html
new file mode 100755
index 0000000..ccc7cbb
--- /dev/null
+++ b/eps/hpr2816/hpr2816_full_shownotes.html
@@ -0,0 +1,262 @@
Gnu Awk - Part 14 (HPR Show 2816)
+
Redirection of input and output - part 1
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
This is the fourteenth episode of the “Learning Awk” series which is being produced by b-yeezi and myself.
+
In this episode and the next I want to start looking at redirection within Awk programs. I had originally intended to cover the subject in one episode, but there is just too much.
+
So, in the first episode I will be starting with output redirection and then in the next episode will spend some time looking at the getline command used for explicit input, often with redirection.
+
Redirection of output
+
So far we have seen that when an awk script uses print or printf the output is written to the standard output (the screen in most cases). The redirection feature in awk allows output to be written elsewhere.
+
How this is achieved is described in the following sections.
+
Redirecting to a file
+
print items > output-file
+printf format, items > output-file
+
Here, 'items' denotes the items to be printed, 'format' is the format expression for 'printf', 'output-file' is an expression which is converted to a string and contains the name of the output file.
+
Here’s a simple example. It uses the file of fruit data introduced in episode number 2. This data file is included with this show (awk14_fruit_data.txt):
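$ awk 'NR > 1 {print $1 > "fruit_names"}' awk14_fruit_data.txt
$ cat fruit_names
apple
banana
strawberry
grape
apple
plum
kiwi
potato
pineapple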
Here the script skips the first line of headers, then prints out the fruit name in field 1 to the file called 'fruit_names'. Notice the file name is enclosed in quotes because it is a string.
+
The script will loop once per line of the input file, executing the redirection each time. However, the file contains all of the names in the same order as the input file. This is because of the following behaviour:
+
+
The output file is erased before the first output is written to it.
+
Subsequent writes to the same file do not erase it but append to it.
+
+
It is important to be aware that redirection in Awk is similar to but not the same as that in shell scripts.
+
What we have done here is not really different from running the following command where the shell deals with redirection:
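$ awk 'NR > 1 {print $1}' awk14_fruit_data.txt > fruit_names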
Here Awk is writing to the standard output stream and the shell is capturing this stream and redirecting it to a file. However, things get more complex if the requirement is to write to more than one file from a script.
+
The following downloadable script (awk14_ex1.awk) writes to a collection of output files:
+
$ cat awk14_ex1.awk
+#!/usr/bin/awk -f
+
+# Downloadable example 1 for GNU Awk Part 14
+
+NR > 1 {
+ colour = $2
+ fname = "awk14_" colour "_fruit"
+ printf "Writing %s to %s\n",$1,fname
+ print $1 > fname
+}
+
Running the script writes to files called 'awk14_brown_fruit' and similar in the current directory:
+
$ ./awk14_ex1.awk awk14_fruit_data.txt
+Writing apple to awk14_red_fruit
+Writing banana to awk14_yellow_fruit
+Writing strawberry to awk14_red_fruit
+Writing grape to awk14_purple_fruit
+Writing apple to awk14_green_fruit
+Writing plum to awk14_purple_fruit
+Writing kiwi to awk14_brown_fruit
+Writing potato to awk14_brown_fruit
+Writing pineapple to awk14_yellow_fruit
+
The script announces what it’s doing, which is a little superfluous but helps to visualise what’s going on.
+
Notice that, since the output file names are generated dynamically and are liable to change between each line read from the input file, the script is doing what was described earlier – creating them (or emptying them if they already exist) and then appending to them once open. All the files are closed when the script exits of course.
+
The files created are shown below and the contents of one displayed:
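$ ls awk14_*_fruit
awk14_brown_fruit  awk14_green_fruit  awk14_purple_fruit
awk14_red_fruit    awk14_yellow_fruit
$ cat awk14_purple_fruit
grape
plum

Redirecting and appending to an existing file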
The next type of redirection uses two greater than signs:
+
print items >> output-file
+printf format, items >> output-file
+
In this case the output file is expected to exist already. If it does then its contents are not erased but are appended to. If the file does not exist then it is created and written to as before.
+
When redirecting to a file in a shell script it’s common to see something like this:
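echo "Script starting" > script.log
echo "Script ending" >> script.log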
The use of '>>' in the second case is necessary because otherwise the file would have been cleared out before the message was written. Each redirection like this in Bash involves opening and closing the output file.
+
In an awk script on the other hand – as we have seen – the file is kept open by the script until it is closed on exit. There is a 'close' command which will do this explicitly, and we will look at this shortly.
+
Redirecting to another program
+
This type of redirection uses a pipe symbol to send output to a string containing a command (or commands) for the shell.
+
print items | command
+printf format, items | command
+
The following example shows the fruit names being written to a pair of commands in a shell pipeline:
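$ awk 'NR > 1 {print $1 | "sort -u | nl"}' awk14_fruit_data.txt
     1  apple
     2  banana
     3  grape
     4  kiwi
     5  pineapple
     6  plum
     7  potato
     8  strawberry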
The names are sorted using the 'sort' command, requesting that the results be made unique ('-u'). The output from the sort is run through 'nl' which numbers the lines.
+
As the awk script is run, a sub-process is executed with the two commands. The first name is then sent to this process, and this repeats with each successive name. The sub-process finishes when the script finishes.
+
In this case the 'sort' command will have accumulated all the names, then on the connection being terminated it will perform the sort and pass the results to 'nl'.
+
There is a 'close' command in awk which will close the redirection to the command(s) or to a file. The argument to 'close' needs to be the exact command(s) which define the process (or the exact file name). For this reason it’s a good idea to store the commands or file name in an awk variable.
+
The following downloadable script (awk14_ex2.awk) shows the variable 'cmd' being used to hold the shell commands. The connection is closed to show how it would be done, though there is no actual need to do so here.
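$ cat awk14_ex2.awk
#!/usr/bin/awk -f

# Downloadable example 2 for GNU Awk Part 14

BEGIN {
    cmd = "sort -u | nl"
}

NR > 1 {
    print $1 | cmd
}

END {
    close(cmd)
}

Running the script gives the same result as before:

$ ./awk14_ex2.awk awk14_fruit_data.txt
     1  apple
     2  banana
     3  grape
     4  kiwi
     5  pineapple
     6  plum
     7  potato
     8  strawberry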
Here’s a more real world example (at least it’s real in my world). When I’m preparing an HPR show like this which involves a number of example scripts I need to run them for testing purposes. I have a main directory for HPR shows and a sub-directory per show. I like to make soft links to the examples in this sub-directory so I can run tests without hopping about between directories.
+
In general I make links in this way:
+
ln -s -f PathToExample BasenameOfExample
+
I wrote an Awk script to help me which takes path names as input and constructs shell commands which it pipes into 'sh'.
+
The following downloadable script (awk14_ex3.awk) shows the process.
+
$ cat awk14_ex3.awk
+#!/usr/bin/awk -f
+
+# Downloadable example 3 for GNU Awk Part 14
+
+{
+ # Split the path up into components
+ n = split($0,a,"/")
+ if (n < 2) {
+ print "Error in path",$0 > "/dev/stderr"
+ next
+ }
+
+ # Build the shell command so we can show it
+ cmd = sprintf("[ -e %s ] && ln -s -f %s %s",$0,$0,a[n])
+ print ">> " cmd
+
+ # Feed the command to the shell
+ printf("%s\n",cmd) | "sh"
+}
+
+END {
+ close("sh")
+}
+
The script expects to be given one or more pathnames on standard input. It first takes the path and splits it up based on the '/' character. Since 'split' returns the number of elements, that number indexes the last element. We check that it’s sensible before proceeding. Note that the error message generated by the 'if' test is redirected to '/dev/stderr'. We’ll be looking at this shortly.
+
We use 'sprintf' to make the shell command. It first adds a test that the file path leads to a file, then if so the shell command uses the 'ln' command to make a soft link. We use the '-f' option which forces the creation to proceed even if the link already exists. The first argument to 'ln' is the path and the second the basename (last component) of the file path.
+
This command is printed for reference, then it is executed by printing to a process running 'sh' (which will be the Bourne shell or similar by default).
+
Running the script can be achieved thus. We use 'printf' as a simple way of adding a newline to each pathname. The paths come from a filename expansion which includes a question mark. Running it gives the following results:
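$ printf "%s\n" Gnu_Awk_Part_14/hpr2816/awk14_ex?.awk | ./awk14_ex3.awk
>> [ -e Gnu_Awk_Part_14/hpr2816/awk14_ex1.awk ] && ln -s -f Gnu_Awk_Part_14/hpr2816/awk14_ex1.awk awk14_ex1.awk
>> [ -e Gnu_Awk_Part_14/hpr2816/awk14_ex2.awk ] && ln -s -f Gnu_Awk_Part_14/hpr2816/awk14_ex2.awk awk14_ex2.awk
>> [ -e Gnu_Awk_Part_14/hpr2816/awk14_ex3.awk ] && ln -s -f Gnu_Awk_Part_14/hpr2816/awk14_ex3.awk awk14_ex3.awk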
This is a script which I can use in all sorts of other contexts, though it probably needs some refinement to be completely foolproof.
+
Note that some caution is needed when writing shell commands in awk because of the potential pitfalls when using quotes. See the GNU Awk User’s Guide section 10.2.9 for hints.
+
Redirecting to a coprocess
+
This type of redirection uses a pipe symbol and an ampersand to send output to a string containing a command (or commands) for the shell.
+
print items |& command
+printf format, items |& command
+
This is an advanced feature which is a gawk extension. Unlike the previous redirection, which sends to a program, this form sends to a program and allows the program’s output to be read back. That is why the command is referred to as a coprocess.
+
Since it is necessary to use our next main topic 'getline' to achieve all of this we’ll postpone discussing the subject until the next episode.
+
Redirecting to special files
+
There are three standard Unix channels that are known as standard input, standard output, and standard error output (or more commonly standard error). These are connected to keyboard and screen in the default case.
+
Normally a Unix program or script reads from standard input and writes to standard output and generates any error messages on standard error. There is a lot more to this than described here but this will suffice for the moment.
+
Gnu Awk can use three special file names to access these channels:
+
+
/dev/stdin: standard input
+
/dev/stdout: standard output
+
/dev/stderr: standard error output
+
+
So, for example, a script can write explicitly to standard error with a command of the form:
+
print "Invalid number" > "/dev/stderr"
+
See the GNU Awk User’s Guide section 5.7 on this subject for more details. There are also other special names available as described in the Guide in section 5.8.
+
Next episode
+
I will be continuing with the second half of this episode in a few weeks.

Links

GNU Awk User’s Guide:
  Redirecting output of print and printf
  Special Files for Standard Preopened Data Streams
  Special File names in gawk

Previous shows in this series on HPR:
  “Gnu Awk - Part 1” - episode 2114
  “Gnu Awk - Part 2” - episode 2129
  “Gnu Awk - Part 3” - episode 2143
  “Gnu Awk - Part 4” - episode 2163
  “Gnu Awk - Part 5” - episode 2184
  “Gnu Awk - Part 6” - episode 2238
  “Gnu Awk - Part 7” - episode 2330
  “Gnu Awk - Part 8” - episode 2438
  “Gnu Awk - Part 9” - episode 2476
  “Gnu Awk - Part 10” - episode 2526
  “Gnu Awk - Part 11” - episode 2554
  “Gnu Awk - Part 12” - episode 2610
  “Gnu Awk - Part 13” - episode 2804

Resources:
  ePub version of these notes
  Examples: awk14_fruit_data.txt, awk14_ex1.awk, awk14_ex2.awk, awk14_ex3.awk

diff --git a/eps/hpr2816/hpr2816_full_shownotes.pdf b/eps/hpr2816/hpr2816_full_shownotes.pdf
new file mode 100755
index 0000000..704810f
Binary files /dev/null and b/eps/hpr2816/hpr2816_full_shownotes.pdf differ
diff --git a/eps/hpr2816/hpr2816_full_shownotes_abbyy.gz b/eps/hpr2816/hpr2816_full_shownotes_abbyy.gz
new file mode 100755
index 0000000..286ef5c
Binary files /dev/null and b/eps/hpr2816/hpr2816_full_shownotes_abbyy.gz differ
diff --git a/eps/hpr2816/hpr2816_full_shownotes_jp2.zip b/eps/hpr2816/hpr2816_full_shownotes_jp2.zip
new file mode 100755
index 0000000..a8bca51
Binary files /dev/null and b/eps/hpr2816/hpr2816_full_shownotes_jp2.zip differ
diff --git a/eps/hpr2824/hpr2824_awk15_ex1.awk b/eps/hpr2824/hpr2824_awk15_ex1.awk
new file mode 100755
index 0000000..c8562c1
--- /dev/null
+++ b/eps/hpr2824/hpr2824_awk15_ex1.awk
@@ -0,0 +1,8 @@
+#!/usr/bin/awk -f
+
+# Downloadable example 1 for GNU Awk Part 15
+
+{ print "R1 ---" }
+{ print "R2",$0 }
+{ print "R3",$0 }
+
diff --git a/eps/hpr2824/hpr2824_awk15_ex2.awk b/eps/hpr2824/hpr2824_awk15_ex2.awk
new file mode 100755
index 0000000..6394757
--- /dev/null
+++ b/eps/hpr2824/hpr2824_awk15_ex2.awk
@@ -0,0 +1,8 @@
+#!/usr/bin/awk -f
+
+# Downloadable example 2 for GNU Awk Part 15
+
+{ print "R1 ---" }
+{ print "R2",$0; getline }
+{ print "R3",$0 }
+
diff --git a/eps/hpr2824/hpr2824_awk15_ex3.awk b/eps/hpr2824/hpr2824_awk15_ex3.awk
new file mode 100755
index 0000000..5923de8
--- /dev/null
+++ b/eps/hpr2824/hpr2824_awk15_ex3.awk
@@ -0,0 +1,15 @@
+#!/usr/bin/awk -f
+
+# Downloadable example 3 for GNU Awk Part 15
+
+{
+ if ($NF == "-") {
+ $NF = ""
+ line = $0
+ getline
+ print line $0
+ }
+ else {
+ print $0
+ }
+}
diff --git a/eps/hpr2824/hpr2824_awk15_ex4.awk b/eps/hpr2824/hpr2824_awk15_ex4.awk
new file mode 100755
index 0000000..e0597fd
--- /dev/null
+++ b/eps/hpr2824/hpr2824_awk15_ex4.awk
@@ -0,0 +1,17 @@
+#!/usr/bin/awk -f
+
+# Downloadable example 4 for GNU Awk Part 15
+
+BEGIN {
+ if (ARGC != 2 ) {
+ print "Needs a file name argument" > "/dev/stderr"
+ exit
+ }
+
+ data = ARGV[1]
+
+ while ( (getline line < data) > 0 )
+ print line
+ close(data)
+}
+
diff --git a/eps/hpr2824/hpr2824_awk15_ex5.awk b/eps/hpr2824/hpr2824_awk15_ex5.awk
new file mode 100755
index 0000000..dab3842
--- /dev/null
+++ b/eps/hpr2824/hpr2824_awk15_ex5.awk
@@ -0,0 +1,13 @@
+#!/usr/bin/awk -f
+
+# Downloadable example 5 for GNU Awk Part 15
+
+BEGIN {
+ cmd = "wget -q http://hackerpublicradio.org/stats.php -O -"
+ while ((cmd | getline) > 0) {
+ if ($0 ~ /^Shows in Queue:/)
+ printf "Queued shows on HPR: %d\n", $4
+ }
+ close(cmd)
+}
+
diff --git a/eps/hpr2824/hpr2824_awk15_ex6.awk b/eps/hpr2824/hpr2824_awk15_ex6.awk
new file mode 100755
index 0000000..d2683d5
--- /dev/null
+++ b/eps/hpr2824/hpr2824_awk15_ex6.awk
@@ -0,0 +1,13 @@
+#!/usr/bin/awk -f
+
+# Downloadable example 6 for GNU Awk Part 15
+
+BEGIN {
+ cmd = "wget -q http://hackerpublicradio.org/stats.php -O -"
+ while ((cmd | getline line) > 0);
+ close(cmd)
+
+ split(line,fields,",")
+ printf "Queued shows on HPR: %d\n", fields[10]
+}
+
diff --git a/eps/hpr2824/hpr2824_awk15_ex7.awk b/eps/hpr2824/hpr2824_awk15_ex7.awk
new file mode 100755
index 0000000..fecff04
--- /dev/null
+++ b/eps/hpr2824/hpr2824_awk15_ex7.awk
@@ -0,0 +1,15 @@
+#!/usr/bin/awk -f
+
+# Downloadable example 7 for GNU Awk Part 15
+
+BEGIN {
+ db = "awktest.db"
+ cmd = "sqlite3 " db
+ querytpl = "select id,title from episodes where id = %d;\n"
+}
+
+$0 ~ /^[0-9]+$/ {
+ printf querytpl,$0 |& cmd
+ cmd |& getline result
+ print result
+}
diff --git a/eps/hpr2824/hpr2824_awk15_testdata1 b/eps/hpr2824/hpr2824_awk15_testdata1
new file mode 100755
index 0000000..6a7746a
--- /dev/null
+++ b/eps/hpr2824/hpr2824_awk15_testdata1
@@ -0,0 +1,3 @@
+voluptatibus
+quaerat
+sunt
diff --git a/eps/hpr2824/hpr2824_awk15_testdata2 b/eps/hpr2824/hpr2824_awk15_testdata2
new file mode 100755
index 0000000..01a6764
--- /dev/null
+++ b/eps/hpr2824/hpr2824_awk15_testdata2
@@ -0,0 +1,6 @@
+Dolore eum corporis excepturi. -
+Dolorum nulla qui nemo at earum beatae. Laborum
+quo hic rem aspernatur accusamus -
+praesentium. Impedit eveniet ut reprehenderit
+deleniti aut placeat. -
+Laudantium sapiente eaque dolor.
diff --git a/eps/hpr2824/hpr2824_full_shownotes.epub b/eps/hpr2824/hpr2824_full_shownotes.epub
new file mode 100755
index 0000000..bba48b0
Binary files /dev/null and b/eps/hpr2824/hpr2824_full_shownotes.epub differ
diff --git a/eps/hpr2824/hpr2824_full_shownotes.html b/eps/hpr2824/hpr2824_full_shownotes.html
new file mode 100755
index 0000000..2dfae59
--- /dev/null
+++ b/eps/hpr2824/hpr2824_full_shownotes.html
@@ -0,0 +1,330 @@
Gnu Awk - Part 15 (HPR Show 2824)
+
Redirection of input and output - part 2
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
This is the fifteenth episode of the “Learning Awk” series which is being produced by b-yeezi and myself.
+
This is the second of a pair of episodes looking at redirection in Awk scripts.
+
In this episode I will spend some time looking at the getline command used for explicit input (as opposed to the usual implicit sort), often with redirection. The getline command is a complex subject which I will cover only relatively briefly. You are directed to the getline section of the GNU Awk User’s Guide for the full details.
+
Redirection of input
+
A reminder of how awk processes rules
+
We are going to look at how awk’s normal input processing is changed in this episode, so I thought it might be a good idea to revisit how things work in the normal course of events.
+
The awk script reads a line from a file or standard input and then scans the (non BEGIN/END) rules that make up the script in the sequence they are listed. If a rule matches then it is run, and the process of matching continues until all rules have been checked. It is entirely possible that multiple rules will match, and they will all be executed if so, in the sequence they are encountered.
+
I have prepared a data file awk15_testdata1 and a simple script awk15_ex1.awk to demonstrate this, both downloadable. The data is generated with the lorem command (see the footnote at the end of these notes) thus:
+
$ printf "%s\n" $(lorem -w 3) > awk15_testdata1
+
The two files are shown here:
+
$ cat awk15_ex1.awk
+#!/usr/bin/awk -f
+
+# Downloadable example 1 for GNU Awk Part 15
+
+{ print "R1 ---" }
+{ print "R2",$0 }
+{ print "R3",$0 }
+
+$ cat awk15_testdata1
+voluptatibus
+quaerat
+sunt
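
Running the script gives the following result:

$ ./awk15_ex1.awk awk15_testdata1
R1 ---
R2 voluptatibus
R3 voluptatibus
R1 ---
R2 quaerat
R3 quaerat
R1 ---
R2 sunt
R3 sunt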
You can see that each rule is run for each line read from the data file. Rule 1 just prints some hyphens and does nothing with the data, but rules 2 and 3 print the line that was read. There is nothing to stop any of these rules from running.
+
The getline command
+
So far we have encountered awk scripts which have read lines from a file or standard input and used them to match patterns which invoke various actions. That is different from the way many other programming languages handle input – and is one of the great strengths of awk.
+
The 'getline' command can be used to read lines explicitly outside the usual read→pattern-match→action cycle of awk.
+
Simple usage
+
The 'getline' command used on its own (with no arguments) reads in the next line and splits it up into fields in the normal way. If used with normal input it affects how data is read and how rules are executed.
+
If 'getline' finds a record it returns 1, and if it encounters the end of the file it returns 0. If there’s an error while reading it returns -1 (and the variable 'ERRNO' will contain a description of the error).
+
The following script (awk15_ex2.awk) is the same as the one just looked at except that it now calls 'getline' inside rule 2.
+
$ cat awk15_ex2.awk
+#!/usr/bin/awk -f
+
+# Downloadable example 2 for GNU Awk Part 15
+
+{ print "R1 ---" }
+{ print "R2",$0; getline }
+{ print "R3",$0 }
+
+
Running the script gives the following result:
+
$ ./awk15_ex2.awk awk15_testdata1
+R1 ---
+R2 voluptatibus
+R3 quaerat
+R1 ---
+R2 sunt
+R3 sunt
+
+
Here it can be seen that rule 2 printed the first line read from the data file. The 'getline' call then read the second line, replacing the first one, and rule 3 then printed it. The third line was then read in the normal way; there was nothing left for the 'getline' to read, so rules 2 and 3 both printed that last line.
+
The following downloadable example deals with a file of text where some lines have continuations. This is shown by the line ending with a hyphen. The script detects these lines and concatenates them with the next line. The data file (edited output from the 'lorem' command again) is included with this show (awk15_testdata2) and is listed below.
+
$ cat awk15_ex3.awk
+#!/usr/bin/awk -f
+
+# Downloadable example 3 for GNU Awk Part 15
+
+{
+ if ($NF == "-") {
+ $NF = ""
+ line = $0
+ getline
+ print line $0
+ }
+ else {
+ print $0
+ }
+}
+$ cat awk15_testdata2
+Dolore eum corporis excepturi. -
+Dolorum nulla qui nemo at earum beatae. Laborum
+quo hic rem aspernatur accusamus -
+praesentium. Impedit eveniet ut reprehenderit
+deleniti aut placeat. -
+Laudantium sapiente eaque dolor.
+
Running the script (awk15_ex3.awk) gives the following result:
+
$ ./awk15_ex3.awk awk15_testdata2
+Dolore eum corporis excepturi. Dolorum nulla qui nemo at earum beatae. Laborum
+quo hic rem aspernatur accusamus praesentium. Impedit eveniet ut reprehenderit
+deleniti aut placeat. Laudantium sapiente eaque dolor.
+
+
If the last field ('$NF') is a hyphen then it’s deleted and the line is saved. The 'getline' call then re-fills '$0' and it is printed preceded by the saved line. Using 'getline' makes this type of processing simpler.
+
Note that this script is too simple for real use since it doesn’t deal with cases like the final '-' not being separated from the preceding word, and would fail if there was a hyphen ending the last line – and so on.
Reading into a variable

If 'getline var' is used the next record is read from the main input stream into a variable (var in this example). The record is not split into fields, and variables like 'NF' are not changed. Since the main input stream is being read, variables like 'NR' (number of records) are changed.
+
Reading from a file
+
This is another case of redirection:
+
getline < file
+
Here 'file' is a string expression that specifies the file name.
+
As mentioned earlier, the string expression used here can also be used to close the file with the 'close' command, but has to be specified exactly. Saving the expression in a variable helps with this:
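The fragment itself was lost from these notes; a sketch of the sort of thing meant would be (the names are illustrative):

file = path "/" "fruit_names"
while ((getline line < file) > 0)
    print line
close(file)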
In this fragment it is assumed that 'path' contains a file path, which is concatenated with a slash and a file name to produce the input specification.
+
Reading from a file into a variable
+
This is a concatenation of the previous two forms:
+
getline var < file
+
As before 'file' is a string expression that specifies the file name.
+
The following simple example (downloadable as part of this episode) deals with the file we generated in episode 14 'fruit_names'.
+
$ cat awk15_ex4.awk
+#!/usr/bin/awk -f
+
+# Downloadable example 4 for GNU Awk Part 15
+
+BEGIN {
+ if (ARGC != 2 ) {
+ print "Needs a file name argument" > "/dev/stderr"
+ exit
+ }
+
+ data = ARGV[1]
+
+ while ( (getline line < data) > 0 )
+ print line
+ close(data)
+}
+
+
Note: I did not explain 'ARGC' and 'ARGV' very clearly in the audio. As with other Unix-like systems, 'ARGC' is a numeric variable containing the count of arguments given to the script when it is run from the command line. The arguments themselves are stored in the array 'ARGV', and element zero is always the name of the command or script, so 'ARGC' is one greater than expected because of this.
+
Running the script (awk15_ex4.awk) simply lists the file.
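$ ./awk15_ex4.awk fruit_names
apple
banana
strawberry
grape
apple
plum
kiwi
potato
pineapple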
This is (another) trivial script presented as an example of how this form of 'getline' can be used. Everything runs in the 'BEGIN' rule. First a check is made to ensure the script has been given an argument (the input file), and if so the name is stored in the variable 'data'. If not, an error message is written and the script exits. If all is well, a 'while' loop runs, reading lines from the file and printing them. Finally the file is closed.
+
As a seasoned awk user by now you will have realised that the above could have been achieved with the much simpler script:
+
$ awk '{print}' fruit_names
+
Reading from a pipe
+
Using 'command | getline' or 'command | getline var' reads from a command. In the first case the record is split into fields in the usual way, and in the second case it is stored in a variable.
+
The following simple example (awk15_ex5.awk downloadable as part of this episode) runs 'wget' to read the HPR statistics page:
+
$ cat awk15_ex5.awk
+#!/usr/bin/awk -f
+
+# Downloadable example 5 for GNU Awk Part 15
+
+BEGIN {
+ cmd = "wget -q http://hackerpublicradio.org/stats.php -O -"
+ while ((cmd | getline) > 0) {
+ if ($0 ~ /^Shows in Queue:/)
+ printf "Queued shows on HPR: %d\n", $4
+ }
+ close(cmd)
+}
+
+
The statistics include a line 'Shows in Queue: x' which the script checks for. If it is found then the number at the end is extracted (as a normal awk field) and it is displayed with different text. Running the script gives the following result (at the time of generating these notes):
+
$ ./awk15_ex5.awk
+
+Queued shows on HPR: 27
+
+
The following downloadable example (awk15_ex6.awk) is essentially the same as the previous one except that it uses 'command | getline var':
+
$ cat awk15_ex6.awk
+#!/usr/bin/awk -f
+
+# Downloadable example 6 for GNU Awk Part 15
+
+BEGIN {
+ cmd = "wget -q http://hackerpublicradio.org/stats.php -O -"
+ while ((cmd | getline line) > 0);
+ close(cmd)
+
+ split(line,fields,",")
+ printf "Queued shows on HPR: %d\n", fields[10]
+}
+
+
It loops through the lines returned, placing each in the variable 'line' but doing nothing else. This means that the last line is left in the variable at the end. This contains comma-separated numbers which are separated into an array called 'fields' using the 'split' function. The 10th element contains the number of queued shows in this case.
+
Using getline with a coprocess
+
This feature is provided by Gnu Awk and it allows a coprocess to be created which can be written to and read from. In the context of print and printf we send data to the coprocess with the '|&' operator, as we have seen briefly already. Not surprisingly, 'getline' can be used to read data back, either being split up into fields in the normal way, or being saved in a variable.
+
This subject is quite advanced and will not be discussed in much depth here. The GNU Awk User’s Guide can be used to find out more about getline and coprocesses and about the whole subject of Two-Way I/O.
+
The following downloadable example (awk15_ex7.awk) demonstrates a use for this feature. In this case we have a SQLite database. This is a copy of one that I use to keep track of HPR episodes on the Internet Archive and is called awktest.db in this incarnation. It is not included with the show.
+
The command to interact with the database is simply 'sqlite3 awktest.db' and this command can be fed an SQL query of the form:
+
select id,title from episodes where id = ?;
+
Here the '?' represents a show number that is inserted into the query (actually in the form of a 'printf' template using '%d', as you will see). On the command line you can do this type of thing in this way:
+
$ printf 'select id,title from episodes where id = %d;\n' {2796..2800} | sqlite3 awktest.db
+2796 IRS,Credit Freezes and Junk Mail Ohh My!
+2797 Writing Web Game in Haskell - Simulation at high level
+2798 Should Podcasters be Pirates ?
+2799 building an arduino programmer
+2800 My YouTube Subscriptions #6
+
Here is the script:
+
$ cat awk15_ex7.awk
+#!/usr/bin/awk -f
+
+# Downloadable example 7 for GNU Awk Part 15
+
+BEGIN {
+ db = "awktest.db"
+ cmd = "sqlite3 " db
+ querytpl = "select id,title from episodes where id = %d;\n"
+}
+
+$0 ~ /^[0-9]+$/ {
+ printf querytpl,$0 |& cmd
+ cmd |& getline result
+ print result
+}
+
In the 'BEGIN' rule the variables 'db', 'cmd' and 'querytpl' are initialised with the database name, the command to interact with it and a template to be used to construct a query.
+
The main rule looks for numbers which are to be used in the query. If a number is detected a 'printf' command uses the format string in 'querytpl', and the number just received, to generate the query and pass it to the coprocess which is running the database command.
+
Then we use 'getline' to read the result from the database into a variable called 'result' which is printed. Be aware that this is a simple script which does not cater for errors of any kind.
+
There are various ways in which this script could be run. One number could be echoed into it, a string of multiple lines containing numbers could be passed in, as could a file of numbers. It could also read from the terminal and process numbers as they are typed in. We will demonstrate it running with a file of show numbers which is listed before the script is run (but not included in downloadable form):
+
$ cat awk15_ex5.data
+2761
+2789
+2773
+$ ./awk15_ex7.awk awk15_ex5.data
+2761 HPR Community News for February 2019
+2789 Pacing In Storytelling
+2773 Lead/Acid Battery Maintenance and Calcium Charge Voltage
+
There is more that could be said about redirection of input and output, as well as about coprocesses. In fact there are many more subjects within Gnu Awk that could be examined. However, this series will soon be coming to an end.
+
My collaborator b-yeezi and I feel that the areas of Gnu Awk we have not covered in this series might be best left for you to investigate further if you have the need. We both feel that awk is a very useful tool in many respects, but does not stand comparison with more advanced scripting languages such as Python, Ruby and Perl. Perl in particular has borrowed many ideas from Awk but has extended them considerably. Ruby was designed with Perl in mind, and Python has innovated considerably too and is a very widely-used language. Even though Gnu Awk has advanced considerably since it was created it still shows its age and its usefulness is limited.
+
There are cases where quite complex scripts might be written in Awk, but the way most people seem to use it is as part of a pipeline or inside shell scripts of various sorts. Where you might write a complex script in Perl, Python or Ruby (for example), taking on a large project solely in Awk seems like a bad choice today.
+
Before we finish this series it is planned to produce one more episode – number 16. In it b-yeezi and I will record a show together. At the time of writing there is no timescale, but we will endeavour to do this as soon as our schedules allow.
Footnote: the Lorem Ipsum text here is generated by the 'lorem' command which is installed with the Perl module called Text::Lorem. You can generate words, sentences or paragraphs of pseudo-Latin with it. The module exists as a Debian package called 'libtext-lorem-perl' amongst others.
+
+
+
+
+
+
+
diff --git a/eps/hpr2877/hpr2877_IA_form.png b/eps/hpr2877/hpr2877_IA_form.png
new file mode 100755
index 0000000..96f86c9
Binary files /dev/null and b/eps/hpr2877/hpr2877_IA_form.png differ
diff --git a/eps/hpr2877/hpr2877_IA_menu.png b/eps/hpr2877/hpr2877_IA_menu.png
new file mode 100755
index 0000000..f13aaea
Binary files /dev/null and b/eps/hpr2877/hpr2877_IA_menu.png differ
diff --git a/eps/hpr2877/hpr2877_form_example.png b/eps/hpr2877/hpr2877_form_example.png
new file mode 100755
index 0000000..7f14144
Binary files /dev/null and b/eps/hpr2877/hpr2877_form_example.png differ
diff --git a/eps/hpr2877/hpr2877_full_shownotes.html b/eps/hpr2877/hpr2877_full_shownotes.html
new file mode 100755
index 0000000..b4fbc05
--- /dev/null
+++ b/eps/hpr2877/hpr2877_full_shownotes.html
@@ -0,0 +1,302 @@
Using Zenity with Pdmenu (HPR Show 2877)
+
Zenity is a rather cool program that will display GTK+ dialogs from a script
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Overview
+
I use pdmenu a lot to help me do work on my main desktop PC. I did an HPR show on pdmenu on 13 December 2017 and the author Joey Hess responded in show 2459.
+
In the intervening time I have also integrated Zenity into my menus. This is a GUI tool which generates a number of different pop-up windows known as dialogs, which can display information, or into which information can be typed. The capabilities provided by pdmenu are a little too basic to enable me to do what I need to do.
+
I thought it might be of interest to show some examples of how I use this tool with pdmenu.
+
What is Zenity?
+
According to the manual:
+
+
Zenity is a rewrite of gdialog, the GNOME port of dialog which allows you to display dialog boxes from the command line and shell scripts.
+
+
Zenity runs on Linux, BSD and Windows, and a port to Mac OS X is available.
+
It’s invoked from the command line or a script as:
+
zenity [options]
+
The most important option is one that defines what type of dialog is required. The types are:
+
Dialog Option       Description
-------------       -----------
--calendar          Display calendar dialog
--entry             Display text entry dialog
--error             Display error dialog
--file-selection    Display file selection dialog
--info              Display info dialog
--list              Display list dialog
--notification      Display notification
--progress          Display progress indication dialog
--question          Display question dialog
--text-info         Display text information dialog
--warning           Display warning dialog
--scale             Display scale dialog
--color-selection   Display color selection dialog
--password          Display password dialog
--forms             Display forms dialog

I will not go into a lot of detail here because the manpage covers the subject well, as does the online manual. I will show a couple of examples of how I use Zenity dialogs with Pdmenu.
+
Each of these dialog selection options takes a set of further options which control the configuration and behaviour of the dialog window.
+
There are a few general options:
+
Option                  Description
------                  -----------
--title                 Window title for the dialog box
--window-icon=ICONPATH  Set the window icon
--width=WIDTH           Set the dialog width
--height=HEIGHT         Set the dialog height
--timeout=TIMEOUT       Set the dialog timeout in seconds

The Calendar dialog
+
This takes the following options:
+
Option                 Description
------                 -----------
--text=STRING          Set the dialog text
--day=INT              Set the calendar day
--month=INT            Set the calendar month
--year=INT             Set the calendar year
--date-format=PATTERN  Set the format for the returned date

Any of the options --day=INT, --month=INT or --year=INT will default to the values for the current day if omitted.
+
The format of the date returned can be controlled with the --date-format=PATTERN option which uses the strftime codes, like %Y for the 4-digit year.
Here is a piece of a menu definition from my .pdmenurc file:

#
+# Menu cnews3 - Update Wiki from wiki.db
+#
+menu:cnews3:Update Wiki from wiki.db:Update Wiki from wiki.db
+ exec:Update the _Wiki for this month:display:$HOME/HPR/Community_News/update_wiki
+ exec:Update the Wiki for any _month:pause:\
+ rep=$(zenity --calendar --title='Update Wiki' --text="Select a month" --date-format='01-%b-%Y') &&\
+ { echo "$rep"; $HOME/HPR/Community_News/update_wiki "$rep"; }
+ nop:--
+ exit:E_xit Update Wiki from wiki.db
+
I keep a SQLite database of HPR shows and my notes about them and use the notes when I am recording the monthly Community News episodes.
+
I have a script which will update a MediaWiki Wiki on one of my Raspberry Pis from this database, and that’s what’s being done here. I have a Wiki page per month of HPR shows and update these individually. That way I have the shows and my notes all organised together.
+
+Pdmenu sub-menu
+
+
For the action "Update the Wiki for any month" the menu will show a Zenity calendar and will return the date of the start of the month as '01-Jul-2019' for example.
+
Notice that zenity is being invoked in a command substitution, and the result is being stored in a variable called rep. The statement setting this variable may succeed (with a value in $rep) or may fail. The && { list } part at the end handles the success case. In it the date returned is echoed (so I know what it is) and is passed to a script that updates the wiki for the particular month.
+
All I have to do to update a particular month when presented with the calendar dialog is to click the '<' or '>' sign to the left or right of the month name and it moves backward or forward one month. I then click 'OK' when I have found the month I need, paying no heed to the day since it’s replaced by '01' in the menu.
+
The Forms Dialog
+
This takes the following options:
+
+
+
+
Option                        Description
--text=STRING                 Set the dialog text
--add-entry=FIELDNAME         Add a new Entry in forms dialog
--add-password=FIELDNAME      Add a new Password Entry in forms dialog
--add-calendar=FIELDNAME      Add a new Calendar in forms dialog
--separator=STRING            Set output separator character
--forms-date-format=PATTERN   Set the format for the returned date
+
+
+
+
This dialog lets you assemble some other dialog types into a form.
Here is a piece of a menu definition from my .pdmenurc file:
+
exec:_Collect JSON from IA:pause:\
  rep=$(zenity --forms --text='Enter show numbers' --add-entry=Start --add-entry=End) &&\
  $HOME/HPR/InternetArchive/collect_show_data "${rep%|*}" "${rep#*|}"
+
I keep another SQLite database with information about HPR shows which have been written to the Internet Archive (archive.org). I have scripts which collect data from archive.org in the JSON format, save it in a file and then add it to my database.
+
The above menu fragment takes in the start and end show numbers in a range that has been uploaded (usually a week’s worth, 5 shows at a time), then it queries archive.org for details of shows.
+
This is what the full menu looks like:
+
Pdmenu for IA management
+
+
The following dialog is shown to enable show number entry:
+
Zenity range dialog
+
+
Note that after '&&' in the menu definition we do not need enclosing curly braces since there is only one command.
+
Note also that the zenity call returns two numbers separated by the default separator ('|'). These are separated out when calling the script that does the work, making use of the shell’s parameter substitution facilities. See the Bash Tips shows on HPR, particularly episode 1648 (Remove matching prefix pattern and Remove matching suffix pattern).
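+
For example, with an illustrative pair of show numbers:
+
$ rep='3332|3336'
$ echo "${rep%|*}"
3332
$ echo "${rep#*|}"
3336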
+
Summary
+
I need menus since I can never remember what I am supposed to do to carry out workflows and other processes! My motto is “If in doubt, write a script”!
+
The facilities in Pdmenu for prompting and gathering information are a bit too basic for my needs, but Zenity does a fine job for me here. OK, so it has moved me away from my usual preference for command-line stuff, but I find this to be a great compromise!
+
One slight grumble about Pdmenu is that the shell it uses to run commands is 'sh', not 'bash', so some of the Bashisms I’d like to use are not available. It’s not hampering me too much now, and when it does I’ll maybe consider changing the source of pdmenu to do what I want if I can.
+
+
Life and Times of a Geek part 3 (HPR Show 2968)
+
Part 3 of my personal story of experiences with computers
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Introduction
+
In the last part of my story (show 1811 in 2015) I told you about some of my experiences at the University of Manchester as a postgraduate student from around 1973.
+
Today I want to talk a little more about my time in Manchester and mention some of the things I did that may be of interest to Hackers!
+
Researching for the episode
+
As I have been researching for this HPR episode I realise how long ago some of these events were - in Internet years particularly. In many cases I could find no online records of places, equipment or people, presumably because any records there might be are on paper and have never made it online. I contacted a company that made some of the laboratory equipment I used, thinking it might be of interest. The person I spoke to remembered the equipment I was referring to, but the company had kept no records of it and had had to discontinue it due to modern safety concerns.
+
I find this somewhat dispiriting and it makes me feel very very old!
+
Being a postgraduate student
+
A change of location
+
As I mentioned in the last episode, I started my time at Manchester mainly working in the Animal House in the basement of a building on Coupland Street, in the main University area.
+
It was not an ideal location; it was beset with cockroaches and mice which lived in the utility tunnels that ran between buildings. Not a good place to house animals or to work.
+
An Animal House previously owned by UMIST (University of Manchester Institute of Science and Technology) became free, and we relocated there. This was on the top floor of the Roscoe Building (I think), a newer building across Oxford Road. There were multiple rooms there for offices and to house animals and experimental apparatus, so it was an advantageous move. It required a fair amount of relocation of equipment and materials though.
+
Research
+
It was expected that we’d be able to build our own apparatus if necessary. There was a quite well-equipped workshop in the department and people with the skills to help, but they were busy and often unavailable. So I had some equipment built for me, but ended up making some of my own.
+
Arena
+
I needed an arena in which my Barbary doves would be placed, to observe their behaviour. There was a wooden arena available when I first started but this didn’t prove to be particularly usable since I had to stand beside it and look through a one-way viewing window, and the bird inside could hear me. I ended up recording behavioural data from my doves in a 1-metre square covered arena which I had built myself using Dexion[1] and hardboard.
The base of this arena was where the birds were placed but above this I built a 4-sided pyramid for observation. I had access to a monochrome video camera which was mounted at the top of the pyramid. However, it would not work pointing downwards so I had to install it horizontally on a platform and set up a 45-degree mirror to look down. The camera recorded stuff on a reel-to-reel Video Recorder using what I think was ½ inch magnetic tape. I played it back in order to analyse the data using a small black and white monitor.
+
The arena which the camera was recording was painted white and illuminated with fluorescent lights for the best visibility of the experimental animals. There were feeding stations randomly placed around the floor where the birds would find a metered amount of grain.
+
I had drawn out a plan of the arena floor with the feeding stations and each bird’s movement was drawn out on a duplicated copy of this plan once the video tape could be analysed.
+
Skinner Boxes
+
In another series of experiments I was also using a device called a Skinner Box. This is a chamber (an “Operant Conditioning Chamber”) in which an animal is trained to perform some action in response to a stimulus.
+
These devices consisted of a small box with metal sides, and a transparent door. The wall opposite the door was fitted with a perspex panel which operated a micro-switch, and underneath was a hopper of bird seed that could be raised and lowered with an electric motor. The bird in the box was trained to peck the switch and would then receive a reward of seed by raising the hopper. The switch could be illuminated from behind so that the bird would learn to peck only when the light was on, or a particular colour was used.
+
The box itself was built in the department by workshop staff using a product called “Handy Tube”. This consisted of square-section steel tubes that could be fixed together with jointing pieces that were hammered into the ends of the tubes.
+
Campden Instruments Ltd
+
The Skinner box electronics were driven by programmable laboratory equipment. This came from a company called Campden Instruments Limited, and consisted of a series of metal boxes containing components, which were clipped to metal rods through which they were powered. I can find no information on the power requirements, but a leaflet about a later device from this company shows it required 22-30 volts DC through its power rails but drew a maximum of 50 mA.
+
The departmental workshop had constructed floor-standing racks for these units, which could accommodate several rows of the different boxes, with 5 or 6 per row.
+
The boxes contained a variety of electronics, from logic gates such as simple AND and OR gates and INVERTERS to chart recorders and counters. Each box had a number of press-stud connectors on the front so it was possible to use a piece of wire with suitable connectors to connect the output of one to the input of another.
+
I remember configuring this system to turn on a coloured key light in a Skinner Box, detect a key press and trigger the operation of the food hopper. The bird’s behaviour was recorded on a paper chart, and the number of sessions counted on a counter.
+
I recall working out how to make what I think was a flip-flop circuit or possibly an oscillator which I used to make a little speaker buzz when a session finished. This meant I could leave the experiment running and go back to my office where the buzzer was and be alerted when it was finished.
+
As I mentioned, I contacted Campden Instruments about their 1970s systems but sadly they had no record of these devices any more. They pointed out that such laboratory equipment, with its bare power rails, could not be sold today for safety reasons.
+
Slides
+
Click a thumbnail to see the full image.
+
External view of the arena. There is a viewing window, and control buttons nearby. Note the instructions chalked on the blackout material. I was lucky to have help from one of the technicians to run the experiments.
+
The arena floor with its random pattern of feeding stations. The birds had been trained to remove the (high-tech!) lids and were being tested to see if they preferred one colour over the other when it signalled a greater reward.
+
A simplified Skinner Box setup. This was used for training the birds. There was constant white noise in the box to prevent the birds being distracted by external sounds. A dim light illuminated the interior and the round key could be illuminated. Later a feeding hopper was added and then used to deliver food rewards.
+
The picture shows the rack developed in-house holding Campden Instruments equipment used to run the training session.
+
Things got a lot more complicated when an experiment was being run in a Skinner Box. There is a mixture of the Campden Instruments and another vendor’s kit here. This rig was set up for two keys illuminated in different colours. There are counters for pecks on these two colours and for numbers of rewards.
+
It’s a long time ago now but I think the device with what looks like a loop of a plastic material hanging down from it was a way of triggering events at pre-defined intervals. My memory of this is rather hazy though.
+
One of the experimental doves receiving a reward in the food hopper. The Skinner Box has been configured with two keys and one hopper. The speaker delivered white noise as explained in an earlier slide.
+
This bird looks as if it’s waiting for one of the keys to illuminate so it can peck it and get a reward.
+
Being a Demonstrator
+
As a postgraduate student it was expected that we would help out with the teaching of undergraduate students. This took the form of being a demonstrator - assisting in laboratory sessions. We were paid for this, so it was usually something we were happy to do. It could require a fair bit of preparation though, since what the undergraduates were being taught was not necessarily something we had learned ourselves as undergraduates.
+
I don’t remember all the lab sessions I worked in over the years I was doing this, but I do remember a few:
+
+
Dissection labs - for first-year students usually. We helped with classes for Medical students, who had the reputation of leaving the worst mess of smashed up animal bits when they’d finished!
+
Microscope labs - looking at slides of stained tissues, identifying and drawing them. Many students were not experienced with microscopes and often couldn’t find their specimens, or crunched through the glass slide with the lens by winding it down without checking.
+
Physiology labs - making physiological preparations with recently killed frogs or cockroaches; monitoring nerve impulses with an oscilloscope.
+
Statistics labs - helping students perform statistical tests on data, and trying to explain what the results meant!
+
Brain labs - my Supervisor ran a lab session dissecting human brains, which were in the Medical School Anatomy laboratory. This was challenging at first since the medical students were often busy working on cadavers at the same time! My fellow PhD students and I had to learn about the structure of the human brain quite rapidly to do this course justice.
+
+
I could recount more anecdotes about these sessions. There are probably a number of HPR episodes that could be made on this subject, but I’ll control myself!
+
Computers
+
Of course, I made use of the computers that I mentioned in the last episode. To be honest, in some cases the use of these resources was more rewarding than the Biology research.
+
Computer Graphics Unit
+
As mentioned before, I had many sheets of paper with plans of my arena onto which animal tracks had been traced - from the video recordings. I needed to turn these into coordinates for computer analysis and to do this I contacted the Computer Graphics Unit, which was part of UMRCC.
+
They had a PDP11 computer which could be used for data capture and analysis, but particularly they had a digitiser. This was a D-MAC device, made by Dobbie McInnes of Glasgow.
+
I have not been able to find much information about this machine other than this StackExchange article, so I will have to try and describe it for you. It consisted of a heavy glass-topped surface on substantial legs. It could be tilted for ease of use. The top was perhaps a metre square, and under the transparent top was a space in which an X and Y sensor ran.
+
The principle of the device was that as a mouse or puck was moved about the table top the X/Y machinery underneath followed it to determine its coordinates. It was possible to configure the D-MAC to output coordinates more or less continuously or when a button on the puck was pressed. Its default mode of output on the models I used was 8-hole paper tape.
+
I would place my sheets on the table one at a time, holding them in place with masking tape. The device could be zeroed to a corner of the picture, then the track of the bird could be traced, pressing the output button at each point visited.
+
The Computer Graphics Unit stopped providing this service during the period I needed it, but I was able to visit a local hospital (The Christie Hospital not too far from the University) which had a D-MAC which they kindly let me use.
+
Data General Nova
+
A research group studying fish vision within the Zoology Department had bought a mini-computer, a Data General Nova. I don’t recall the model but it may have been a 1200. It was to be used to run experiments in a lab close to where I worked. It was initially set up in the office I was using and I was given free rein to use it for a while.
+
This was a 16-bit machine with ferrite core memory – I have no record of how much – maybe 16K or even 32K. It had a paper tape reader and punch and a teletype. I remember a FORTRAN compiler being bought with it, and this was in the form of paper tape.
+
To start the machine it was necessary to enter the boot loader by hand using the switches on the front panel. Then the loader program was brought in from paper tape and this could be used to load the compiler or compiled programs.
+
Using the teletypes on the ICL 1906A
+
I mentioned the ICL 1906A in my last episode. I found that there was a room of terminals and ASR 33 teletypes that were available to users of the central computing facilities. I learnt how to use the GEORGE operating system on these teletypes, and found out how to write and store programs and data on the ICL machine. The user interface used on the teletypes was called MOP (Multiple Online Programming).
It was possible to prepare work on the ICL via MOP and then to submit it as a batch job to the CDC 7600. As noted on Wikipedia, the version of GEORGE used at UMRCC had been modified to allow this.
+
It is now possible to run an emulation of GEORGE 3 on the Raspberry Pi!
+
Cyber-72
+
I also used another computer at UMRCC, though just in an exploratory way. This was the CDC Cyber-72. My memory of this machine is hazy now, and it may have been a Cyber-76, or perhaps the 72 was replaced by a 76 at some point.
+
This machine had terminals with good screens, and offered a variety of programming languages. I used it to experiment with APL, a programming language developed in the 1960s by Kenneth E. Iverson.
+
APL uses a range of symbols in the language. As I recall, the Cyber-72 version didn’t support these, but it was still quite usable. I was able to write a simple statistics program in it to prove it could be done, but I didn’t carry on using it thereafter since it seemed too limiting in terms of availability.
+
Funding
+
As I mentioned, I paid for my first year myself from money saved from working during my year out. I managed to get some grant funding for one or two further years, but this was not enough. I was lucky to get a part-time job within the Zoology department as a Laboratory Technician to help with funding.
+
This involved helping to set up laboratory sessions, but mostly I was called upon to do driving work. I’d ferry students about from time to time and sometimes buy or collect things for the department at various places around Manchester and its surroundings.
+
Notable events I now recall:
+
+
Taking a fellow postgraduate student to collect freshwater mussels in a stream just beside the Jodrell Bank radio telescope.
+
Collecting shellfish at Llandudno Bay in North Wales
+
A regular trip to buy maggots at a local fishing supplies shop
+
Collecting dead gulls around a reservoir (probably Audenshaw Reservoir) for a parasite check
+
Helping to catch fish (perch) with nets in various lakes for experimental purposes
+
Collecting large amounts of cow’s blood from a local abattoir to be examined for parasites
+
+
I wanted to own a calculator at this time. I’d been seeing advertisements for the Sinclair Scientific in newspapers and the like for some time. I had learnt to solder at school - in a very basic way using a tinsmith’s soldering iron. This is a shaped chunk of copper on a handle that can be heated in a gas flame like a blowtorch and used to melt solder into a joint.
+
When younger I had also done some soldering at home, using my father’s electric soldering iron I think.
+
Around this time I bought myself an electric iron from Antex and a few accessories such as a soldering iron stand and a clip-on heatsink to prevent fragile items from getting too hot as they were soldered.
+
I bought a Sinclair Scientific kit for the advertised price of £9.95 and managed to build it without any serious mishaps. The design was unusual in several ways:
+
+
it used Reverse Polish Notation to enter calculations
+
it had no decimal point but used exponential notation instead
+
it used repeated addition to multiply numbers and subtraction to divide them
+
other functions such as log and antilog were achieved by iterative methods so the calculator was very slow!
+
+
Check the links for the details if you are interested.
+
Moving on
+
I had spent over 4 years on the research for my PhD. Some of it I paid for myself, as I have told you, though I received a grant for part of it.
+
However, during that time I realised the research topic I’d embarked upon was going nowhere, and it gradually dawned on me that I was not cut out to be a researcher.
+
During the time I’d learnt a lot of stuff including a bit of Electronics, a lot of Biology and a great deal of Computer Science. I felt I’d be better leaving Manchester to see if I could get a job in IT somewhere, and this is what I did.
+
I’ll tell you more in the next episode.
+
Links
+
+
Building experimental apparatus:
+
+
Wikipedia page for Dexion - for building metal structures
+
[1] I still have some lengths of what I assume to be Dexion in my attic, acquired from my last employer who was throwing it away, and recently made some storage shelves with it!↩
+
+
+
+
+
+
+
Fixing simple audio problems with Audacity (HPR Show 3004)
+
Sharing a few experiences with Audacity that may be helpful to others
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
Overview
+
I recorded the audio for the show I did with MrX in late 2019: “hpr2972 :: The foot of the ski slope”. I was using my Zoom H2n recorder in my car, on a small tripod placed on the dashboard. Something about this setup caused the result to be very boomy and (to me) unpleasant to listen to. This episode is about what I did for a cure, after some research.
+
I have also been using the ‘Truncate Silence’ effect in Audacity incorrectly in the past, and I used the opportunity to learn how to do a better job with it.
+
Now, I am well aware that there are some skilled and experienced Audio Engineers out there in HPR-land. I am certainly not one of these, though I quite enjoy fiddling with audio to make it sound better. I’d like to make two requests:
+
+
If I didn’t do a good job, please tell me what I did wrong here, and how I should have done it.
+
Think about doing a show (or shows) on HPR about how to deal with common audio problems. For example: how to remove a mains hum, the use of compression and normalisation.
+
+
Steps taken to clean the audio
+
Noise reduction
+
I always do this because most of what I produce has background noise from my house, mains hum, etc. It’s a simple procedure. It might not have been necessary here, but I did it nevertheless.
+
See the Audacity Manual for a description of how this is done. In brief, the steps are:
+
+
Sample a piece of the audio to get a Noise Profile. I tend to leave the recorder running for a short while before I start speaking so that there’s a suitable piece to sample.
+
Select everything, tune the settings, and run the effect
+
You should see a change to the waveform in Audacity. I usually see that the silences are a lot less noisy.
+
+
High-pass filter
+
This is a way of reducing low-frequency noise. My feeling was that the “hollow” nature of the original audio was due to low frequencies echoing around the car, so I tried this to see if it helped, and found that it did. A high-pass filter:
+
… passes frequencies above its cutoff frequency and attenuates frequencies below its cutoff frequency.
+
+
I set the cutoff frequency to 500.0 Hz, and the Roll-off (dB per octave) to 6dB. I tried 1000.0 Hz first but the result was truly awful - I assume it removed pretty much everything!
+
Amplification
+
Since the previous filter had reduced the volume overall, I applied an amplification. This effect is documented in the Audacity Manual. When the Amplify dialog is first shown there is a value in the Amplification box which, if applied, will produce a new peak amplitude of 0dB. In my case this was 1.106dB and I just used that.
+
Actually this might not have been enough because the end result sounded fairly quiet when listening to it on the HPR feed, though it sounded fine played through Audacity.
+
Silence Truncation
+
There’s a certain art to using this effect properly. I find my sentences seem to trail off a bit when I speak, and this confuses the truncation algorithm. Listening back to my shows I often notice that the final word I’m saying before a pause is truncated. There are a lot of silences in the audio I produce by myself, but there were a lot fewer in this particular case.
+
However, when I applied the usual silence truncation settings to the audio, there were some quite unpleasant truncations at the start and end of words. This made me want to tune things so as to avoid this.
+
The Truncate Silence effect is documented in the Audacity Manual. Here are the settings I used:
+
+
Threshold: -25dB. I tried -20 but some of the quieter beginnings of words were truncated
+
Duration: 0.5 seconds
+
Compress Excess Silence option selected
+
Compress to: 50%
+
+
I had not used these settings before, but had chosen the Truncate Detected Silence option and had not been using the best Threshold value.
+
Demonstration
+
In the audio I have included a demonstration of a piece of audio taken from show 2972, in its original form, after noise reduction, after the high-pass filter had been applied, after amplification and after silence truncation. I hope you find it useful.
+
Conclusion
+
I thought the final audio sounded much better, and the silence truncation wasn’t messing up our speech as it had done before.
+
I hope these details will help others who need to process their audio!
+
+
Bash Tips - 21 (HPR Show 3013)
+
Environment variables
+
Dave Morriss
+
+
+
+
+
+
+
Table of Contents
+
+
+
The Environment (More collateral Bash tips)
+
Overview
+
You will probably have seen references to The Environment in various contexts relating to shells, shell scripts, scripts in other languages and compiled programs.
+
In Unix and Unix-like operating systems an environment is maintained by the shell, and we will be looking at how Bash deals with this in this episode. When a script, program or subprocess is invoked it is given an array of strings called the environment. This is a list of name-value pairs, of the form name=value.
+
Using the environment
+
The environment is used to convey various pieces of information to the executing script or program. For example, two standard variables provided by the shell are 'HOME', which is set to the current user’s home directory, and 'PWD', set to the current working directory. The shell user can set, change, remove and view environment variables for their own purposes as we will see in this episode. The Bash shell itself creates and in some cases manages environment variables.
+
The environment contains global data which is passed down to subprocesses (child processes) by copying. However, it is not possible for a subprocess to pass information back to the superior (parent) process.
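+
A quick way to see this one-way copying (a sketch; the variable names are arbitrary):
+
$ export PARENTVAR='hello'
$ bash -c 'echo "$PARENTVAR"; CHILDVAR="world"'
hello
$ echo "${CHILDVAR:-not set}"
not set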
+
Viewing the environment
+
You can view the environment in a number of ways.
+
+
From the command line the command printenv can do this (this is usually but not always a stand-alone command: it’s /usr/bin/printenv on my Debian system). We will look at this command later.
+
The command env without any arguments does the same thing as printenv without arguments. This is actually a tool to run a program in a modified environment which we will look at later. The environment printing capability can be regarded as more of a bonus feature.
+
Scripting languages like awk (as well as Python and Perl, to name just a few) can view and manipulate the environment.
+
Compiled languages such as C can do this too of course.
+
There are other commands that will show the environment, and we will look at some of these briefly.
+
+
Changing variables in the environment
+
The variables in the environment are not significantly different from the shell parameters we have seen throughout this Bash Tips series. The only difference is that they are marked for export to commands and sub-shells. You will often see variables (or parameters) in the environment referred to as environment variables. The Bash manual makes a distinction between ordinary parameters (variables) and environment variables, but many other sources are less precise about this in my experience.
+
The standard variables in the environment have upper-case names (HOME, SHELL, PWD, etc), but there is no reason why a variable you create should not be in lower or mixed case. In fact, the Bash manual suggests that you should avoid using all upper-case names so as not to clash with Bash’s variables.
+
Variables can be created and changed a number of ways.
+
+
They can be set up at login time (globally or locally) through various standard configuration files. It is intended to look at this subject in an upcoming episode so we will leave discussing the subject until then.
+
By preceding the command or script invocation with name=value expressions which will temporarily place these variables into the environment for the command
+
Using the export command
+
Using the declare command with the -x option
+
The value of an environment variable (once established) can be changed at any time in the sub-shell with a command like myvar=42, just as for a normal variable
+
The export command can also be used to turn off the export marker on a variable
+
Deletion is performed with the unset command (as seen earlier in the series)
+
+
We will look at all of these features in more detail later in the episode.
+
A detailed look
+
Temporary addition to a command’s environment
+
As summarised above, a command can be preceded by name=value definitions, and these set environment variables while the command is running.
+
For example, if an awk script has been placed in /tmp like this:
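One way to create such a file is with cat, ending the input with CTRL+D (a sketch; the file name matches the invocation below):
+
$ cat > /tmp/awktest.awk
BEGIN{ print "Hello World!" }
CTRL+D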
(where CTRL+D means to press D while holding down the CTRL key).
+
It is now possible to invoke awk to execute this file by giving it the environment variable AWKPATH. This is a list of directories where awk looks to find script files.
+
$ AWKPATH=/tmp awk -f awktest
Hello World!
+
Note that:
+
+
The file is found even though the '.awk' part has been omitted; this is something that awk does when searching AWKPATH
+
The setting of AWKPATH is separated from the awk command by a space - not a semi-colon. (If a semi-colon had been used then there would have been two statements on the line, which would not have achieved what was wanted.)
+
The variable AWKPATH is not changed in the parent process (the process that ran awk), the change is temporary, is in the child process, and lasts only as long as the command runs
+
+
Commands relating to the environment
+
The printenv command
+
The printenv command without arguments lists all the environment variables. It may be followed by a list of variable names, in which case the output is restricted to these variables.
+
When no arguments are given the output consists of name=value pairs, but if variable names are specified just the values are listed.
+
This command might be built into the shell, but this is not the case with Bash.
+
The env command
+
This is a stand-alone command which will print a list of environment variables, or run another command in an altered environment.
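+
Its general form is:
+
env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...]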
Without the COMMAND part env is functionally equivalent to printenv.
+
Options are:
+
+
+
+
Option                         Meaning
-, -i, --ignore-environment    start with an empty environment
-0, --null                     (zero) end each output line with a 0 byte rather than newline
-u, --unset=NAME               remove variable from the environment
-C, --chdir=DIR                change working directory to DIR
-S, --split-string=S           process and split S into separate arguments; used to pass multiple arguments on shebang lines
-v, --debug                    show verbose information for each processing step
+
+
+
+
The NAME=VALUE part is where environment variables are defined for the command being run.
+
The env command is often used in shell scripts to run the correct interpreter on the hash bang or shebang line (the first line of the file which begins with '#!') without needing to know its path. It is necessary to know the path of env but this is usually (almost invariably) /usr/bin/env.
+
For example, to run a python3 script you might begin with:
+
#!/usr/bin/env python3
+
The -S option is required if the interpreter needs options of its own. For example:
+
$ cat awktest1
#!/usr/bin/env awk -f
BEGIN{ print "Hello World" }
$ ./awktest1
/usr/bin/env: ‘awk -f’: No such file or directory
/usr/bin/env: use -[v]S to pass options in shebang lines
+
$ cat awktest2
#!/usr/bin/env -S awk -f
BEGIN{ print "Hello World" }
$ ./awktest2
Hello World
+
Script awktest1 fails because env misunderstands 'awk -f', whereas awktest2, which uses -S, works fine.
+
The env command can be run from the command line as a means of running a command with a special environment. For example:
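+
A minimal sketch, using an illustrative variable called MSG:
+
$ env MSG="Hello" printenv MSG
Hello
$ printenv MSG
$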
This defines environment variable MSG and runs 'printenv MSG' to show its value, which is destroyed as soon as the command has finished. The second printenv does nothing because MSG has gone.
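+
A second sketch adds the debug and ignore-environment options (the diagnostic messages that '-v' writes to standard error are not shown here):
+
$ env -v -i MSG="Hello" printenv
MSG=Hello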
Here debug mode is on, and the '-i' option clears the environment of all but MSG. We don’t specify MSG as an argument this time; it’s unnecessary because that variable is all there is in the child environment.
+
Consult the GNU Coreutils Manual for more details of the env command. Note that the version of env being described here is 8.30.
+
The declare command
+
As we saw earlier in this series, declare can be used to create variables (and arrays). If the '-x' option is added to the command then the variables created are also marked for export to the environment used by subsequent commands. Note that arrays cannot be exported in any current versions of Bash. It was apparently planned to do this in the past, but it has not been implemented.
+
The option '+x' can also remove the indicator that makes a variable exported.
+
As expected declare -p can be used to show the declare command required to create a variable and the same applies to looking at environment variables using declare -p -x.
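+
A short sketch of these operations (MYVAR is an arbitrary name):
+
$ declare -x MYVAR='42'
$ bash -c 'echo "${MYVAR:-not set}"'
42
$ declare +x MYVAR
$ bash -c 'echo "${MYVAR:-not set}"'
not set
$ declare -p MYVAR
declare -- MYVAR="42"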
+
The export command
+
This command marks variables to be passed to child processes in the environment.
+
export [-fn] [-p] [name[=value] ...]
+
By default the names refer to shell variables, which can be given a value if desired.
+
Options are:
+
+
+
+
Option   Meaning
-n       the name arguments are marked not to be exported
-f       the name arguments are the names of defined shell functions
-p       displays output in a form that may be reused as input (see declare)
+
+
+
+
Writing 'export -p' with no arguments causes the environment to be displayed in a similar way to 'declare -x -p'.
+
Looking again at the awk example from earlier we could use export to set AWKPATH, but the setting will persist after the process running awk has finished:
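+
Something like this (a sketch following on from the earlier AWKPATH example):
+
$ export AWKPATH=/tmp
$ awk -f awktest
Hello World!
$ printenv AWKPATH
/tmp
+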
You might see export being used in the following way in older scripts:
+
TZ='Europe/London'; export TZ
+
This is perfectly acceptable, but the single-statement form is most common in more recent scripts.
+
The set command
+
We have looked at some of the features that this command offers in other contexts but have not yet examined it in detail. This detailed analysis is overdue, but (for brevity) will be left until a later episode of Bash Tips.
+
For now we will look at set in the context of environment variables.
+
When set is used without any options or arguments then it performs the following function (quoted from the GNU Bash Manual):
+
+
set displays the names and values of all shell variables and functions, sorted according to the current locale, in a format that may be reused as input for setting or resetting the currently-set variables.
+
+
This is a significant amount of output, so it is not a recommended way of examining the environment.
+
The '-k' option performs the following function (again a quote):
+
+
All arguments in the form of assignment statements are placed in the environment for a command, not just those that precede the command name.
+
+
Example: using set -k
+
What this means is actually quite simple. The following example demonstrates what setting '-k' does. The script bash21_ex1.sh is fairly simple:
+
#!/usr/bin/env bash
+
#-------------------------------------------------------------------------------
# Example 1 for Bash Tips show 21: the environment
#-------------------------------------------------------------------------------
+
# Not expected to be in the environment
bt211C=somedata
+
echo "Args: $*"
+
printenv
+
exit
+
I’m calling the variables associated with this script 'bt211[ABC]' so they are easier to find in the environment listing. To prove that defining variables in the script does not affect the environment we define one (bt211C) for later examination. We then echo any arguments given to the script. Finally we use printenv to show the environment.
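+
A sketch of the kind of invocation being described (the order of the printenv lines may differ):
+
$ set -k
$ bt211A=42 ./bash21_ex1.sh arg1 arg2 bt211B=99 | grep -E '^(Args|bt211)'
Args: arg1 arg2
bt211A=42
bt211B=99
$ set +k
+
Note that:
+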
We invoke the script bash21_ex1.sh preceded by bt211A=42 which will cause an environment variable of that name to be created in the process that is initiated
+
We give the script two arguments arg1 and arg2 and we add in another variable assignment bt211B=99. This should be placed in the environment now that set -k is enabled; if it weren’t then this string would just be treated as an argument
+
The output from the script (arguments and data from printenv) is piped through grep which selects anything that starts with 'Args' or 'bt211'
+
We see the two arguments echoed by the script and the two environment variables - the other variable bt211C is not shown because it is not an environment variable.
+
+
As with all of the single-letter options to set this one can be turned off again with 'set +k' (a little counter-intuitive, but that’s how it works).
+
Using environment variables
+
You will probably have seen references to environment variables when reading man pages. We have already seen how awk (gawk) can be made to behave differently when given certain variables such as AWKPATH. The same applies to a number of commands, and there is often a section describing such variables in the man page for the command.
+
Many commands depend on configuration files rather than environment variables nowadays, though it is not uncommon to see environment variables being used as a way to indicate a non-standard location for the configuration files. For example, the GNU Privacy Guard gpg command uses GNUPGHOME to specify a directory to be used instead of the default '~/.gnupg'. Also with the command-line tool for the PostgreSQL database, psql, there are several environment variables that can be set to provide defaults if necessary, for example: PGDATABASE, PGHOST, PGPORT and PGUSER.
+
In general environment variables are used:
+
+
To pass information about the login environment, such as the particular shell, the desktop and the user. For example, 'SHELL' contains the current shell (such as /bin/bash), 'DESKTOP_SESSION' defines the chosen desktop environment (such as xfce) and 'USER' defines the current username. These values are created during login and can be controlled where appropriate by the shell’s configuration files.
+
To pass relevant information to scripts, commands and programs. These variables can be set in the shell’s configuration file(s) or on the command line, either temporarily or permanently. We have seen the ways we can set environment variables permanently and temporarily.
+
+
However, it can be argued that any complex software system is better controlled through configuration files than through environment variables. It is common to see the YAML or JSON formats being used to set up configuration files, as well as other file formats. This method allows many settings to be controlled in one place, whereas using environment variables would require many separate variable definitions. On the other hand, environment variables are simpler to manage than having to deal with YAML or JSON formats.
+
Examples
+
Various ways of displaying the environment
+
#!/usr/bin/env bash
+
#-------------------------------------------------------------------------------
# Example 2 for Bash Tips show 21: the environment
#-------------------------------------------------------------------------------
+
BTversion='21'
export BTversion
+
echo "** Using 'grep' with 'env'"
env | grep -E '(EDITOR|SHELL|BTversion)='
echo
+
echo "** Using 'printenv' with arguments"
printenv EDITOR SHELL BTversion
echo
+
echo "** Using 'grep' with 'export'"
export | grep -E '(EDITOR|SHELL|BTversion)='
echo
+
echo "** Using 'grep' with 'declare'"
declare -x | grep -E '(EDITOR|SHELL|BTversion)='
+
exit
+
Running the script (bash21_ex2.sh) generates the following output:
+
** Using 'grep' with 'env'
SHELL=/bin/bash
EDITOR=/usr/bin/vim
BTversion=21
+
** Using 'printenv' with arguments
/usr/bin/vim
/bin/bash
21
+
** Using 'grep' with 'export'
declare -x BTversion="21"
declare -x EDITOR="/usr/bin/vim"
declare -x SHELL="/bin/bash"
+
** Using 'grep' with 'declare'
declare -x BTversion="21"
declare -x EDITOR="/usr/bin/vim"
declare -x SHELL="/bin/bash"
+
+
This example shows how environment variable values can be examined with env, printenv, export and declare. I will leave you to investigate set if you wish, though it’s not the ideal way to find such information.
+
Accessing the environment in an awk script
+
#!/usr/bin/awk -f
+
#-------------------------------------------------------------------------------
# Example 3 for Bash Tips show 21: printing the environment in Awk
#-------------------------------------------------------------------------------
+
BEGIN{
    for (n in ENVIRON)
        printf "ENVIRON[%s]=%s\n",n,ENVIRON[n]
}
+
Running the script (bash21_ex3.awk) generates the following output in my particular case:
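The output is a series of lines in the format given in the printf statement; a few illustrative lines might be:
+
ENVIRON[SHELL]=/bin/bash
ENVIRON[USER]=hprdemo
ENVIRON[HOME]=/home/hprdemo
+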
If you were to run the above script yourself you would see different values (!) and very likely a lot more of them.
+
Nerdy digression! Ignore if not interested! The way I demonstrate scripts for HPR shows is complicated since I usually run the scripts from the notes while they are being rendered to be sure that the output I show is really correct! This one was actually run over ssh under the local user hprdemo, which has been tailored for such demonstrations, so the environment is not typical.
+
Passing temporary environment variables
+
#!/usr/bin/env bash
+
#-------------------------------------------------------------------------------
# Example 4 for Bash Tips show 21: a way of showing environment variables
#-------------------------------------------------------------------------------
+
#
# We expect one or more arguments
#
if [[ $# = 0 ]]; then
    echo "Usage: $0 variable_name"
    exit 1
fi
+
#
# Loop through the arguments reporting their attributes with 'declare'
#
for arg; do
    declare -p "$arg"
done
+
exit
+
This simple script (bash21_ex4.sh) allows the demonstration of the existence of selected variables in the environment. First we look at the SHELL variable, managed by Bash:
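+
A run of the script might look like this (the value of SHELL is system-dependent):
+
$ ./bash21_ex4.sh SHELL
declare -x SHELL="/bin/bash"
+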
We will be looking at the configuration files that can be used to control your instance of Bash in a later show. The following example (bash21_ex5.sh) shows some of the environment variables I have defined in mine:
+
#-------------------------------------------------------------------------------
# Example 5 for Bash Tips show 21: a few things exported in my .bashrc
#-------------------------------------------------------------------------------
#
# The PATH variable gets my local ~/bin directory added to it
#
export PATH="${PATH}:$HOME/bin"
#
# The above is the older way of doing this. It is possible to write the
# following using '+=' to concatenate a string onto a variable:
# export PATH+=":$HOME/bin"
+
#
# Some tools need a default editor. The only one for me is Vim
#
export EDITOR=/usr/bin/vim
export VISUAL=/usr/bin/vim
+
+
A simple configuration file for a Bash script
+
Note: the following example is quite detailed and somewhat convoluted. You might prefer to skip it; you will probably not lose much by doing so!
+
In the script I use to manage episodes I submit to HPR I use a simple configuration file. The script was begun in 2013 and was intended to be entirely implemented in Bash. At that time I came up with the idea of creating a file of export commands to define a collection of variables. The following is an example called bash21_ex6_1.cfg which contains settings for this episode of Bash Tips (slightly edited):
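+
export _CFG_PROJECT="Bash_Tips__21"
export _CFG_HOSTID=225
export _CFG_HOSTNAME="Dave Morriss"
export _CFG_SUMMARY="Environment variables"
export _CFG_TAGS="Bash,variable,environment"
export _CFG_EXPLICIT="Yes"
export _CFG_FILES=(hpr____.html hpr____.tbz)
export _CFG_STRUCTURE="Tree"
export _CFG_SERIES="Bash Scripting"
export _CFG_NOTETYPE="HTML"
export _CFG_SUMADDED="No"
export _CFG_INOUT="No"
export _CFG_EMAIL="blah@blah"
export _CFG_TITLE="Bash Tips - 21"
export _CFG_STATUS="Editing"
export _CFG_SLOT=""
+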
This idea works in a limited way. Using the 'source' command on the file will cause all of the export statements to be obeyed and the variables will be placed in the environment. There is one exception though; the definition of '_CFG_FILES' does not result in an environment variable because it’s an array and Bash does not support arrays in the environment. However, an array is created as an ordinary variable.
+
Originally I expected that I would need to access these environment variables in sub-processes or using Awk or Perl scripts. In the latter two cases the variables must be in the environment, but I found I didn’t need to do this in fact.
+
The demonstration script bash21_ex6.sh takes these variables and generates a new configuration file from them using declare statements:
+
#!/usr/bin/env bash
+
#-------------------------------------------------------------------------------
# Example 6 for Bash Tips show 21: a poor way to make a configuration file
#-------------------------------------------------------------------------------
+
#
# Example configuration file using 'export'
#
CFG1='bash21_ex6_1.cfg'
if [[ ! -e $CFG1 ]]; then
    echo "Unable to find $CFG1"
    exit 1
fi
+
#
# Alternative configuration file using 'declare', converted from the other one
#
CFG2='bash21_ex6_2.cfg'
+
#
# Strip out all of the 'export' commands with 'sed' in a process substitution,
# turning the lines into simple variable declarations. Use 'source' to obey
# all of the resulting commands
#
source <(sed 's/^export //' $CFG1)
+
#
# Scan the (simple) variables beginning with '_CFG_' and convert them into
# a portable form by saving the output of 'declare -p'
#
declare -p "${!_CFG_@}" > $CFG2
+
#
# Now next time we can 'source' this file instead when we want the variables
#
cat $CFG2
+
exit
+
The variable 'CFG1' contains the name of the file of export commands, bash21_ex6_1.cfg. Rather than placing all of these variables into the environment the script strips the 'export' string from each line, making them simple assignments. The result is processed using the source command, taking its input from a process substitution running 'sed', and then the second configuration file is created, bash21_ex6_2.cfg.
+
The declare command needs explanation:
+
declare -p "${!_CFG_@}" > $CFG2
+
It uses 'declare -p' to print out declare statements, redirecting them to the new configuration file whose name is in variable 'CFG2'. The expression used to find all the variables, "${!_CFG_@}", uses a Bash parameter expansion feature which we looked at back in HPR episode 1648. It returns the names of all variables whose names begin with '_CFG_'. The expression ends with '@', which (as when expanding arrays) causes a list of distinct arguments to be generated rather than one single long string.
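+
For example, with variables like those in the configuration file sketched earlier:
+
$ echo "${!_CFG_@}"
_CFG_DIR _CFG_FILES _CFG_SLOT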
+
The script lists the contents of the file bash21_ex6_2.cfg to demonstrate what has happened.
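+
The output has this general form (values hypothetical):
+
declare -- _CFG_DIR="/home/user/HPR/Bash_Tips"
declare -a _CFG_FILES=([0]="hpr_ep21.flac" [1]="hpr_ep21_notes.html")
declare -- _CFG_SLOT="2020-05-04"
+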
Note how nothing is marked with '-x', because the variables had not been exported to the environment (the array never could be). Note also that the array has been handled properly by 'declare -p', and the output file could be used to back up and restore it. This is a safer format than the original file of assignments.
+
+
diff --git a/eps/hpr3039/hpr3039_config.js b/eps/hpr3039/hpr3039_config.js
new file mode 100755
index 0000000..4ea8cc6
--- /dev/null
+++ b/eps/hpr3039/hpr3039_config.js
@@ -0,0 +1,215 @@
+/* Magic Mirror Config Sample
+ *
+ * By Michael Teeuw http://michaelteeuw.nl
+ * MIT Licensed.
+ *
+ * For more information about how you can configure this file
+ * See https://github.com/MichMich/MagicMirror#configuration
+ *
+ * This is a modified version of my live file with API keys and similar removed. To be used in the
+ * HPR show "Making a Raspberry Pi status display"
+ *
+ */
+
+var config = {
+ address: "0.0.0.0", // Address to listen on, can be:
+ // - "localhost", "127.0.0.1", "::1" to listen on loopback interface
+ // - another specific IPv4/6 to listen on a specific interface
+ // - "", "0.0.0.0", "::" to listen on any interface
+ // Default, when address config is left out, is "localhost"
+ port: 8080,
+ // Modified for MMM-Remote-Control
+ ipWhitelist: ["127.0.0.1", "::ffff:127.0.0.1", "::1", "192.168.0.0/24", "::ffff:192.168.0.0/24"],
+ // Set [] to allow all IP addresses or add a specific IPv4 of 192.168.1.5
+ // : ["127.0.0.1", "::ffff:127.0.0.1", "::1", "::ffff:192.168.1.5"], or
+ // IPv4 range of 192.168.3.0 --> 192.168.3.15 use CIDR format
+ // : ["127.0.0.1", "::ffff:127.0.0.1", "::1", "::ffff:192.168.3.0/28"],
+
+ language: "en",
+ timeFormat: 24,
+ units: "metric",
+
+ modules: [
+ // Update Notification
+ {
+ module: "updatenotification",
+ position: "top_bar"
+ },
+ // Clock
+ {
+ module: "clock",
+ position: "top_left",
+ showWeek: true,
+ timezone: 'Europe/London'
+ },
+ // Calendar
+ {
+ module: "calendar",
+ header: "Calendar",
+ position: "top_left",
+ config: {
+ colored: true,
+ maxTitleLength: 30,
+ fade: false,
+ calendars: [{
+ // Secret address. Go to "Settings" in the calendar, click on the particular
+ // calendar that's wanted, and scroll down to "Integrate Calendar"
+ name: "Google Calendar",
+ url: "https://calendar.google.com/calendar/ical/dave.morriss%40gmail.com/private-##########/basic.ics",
+ symbol: "calendar-check",
+ color: "#825BFF" // violet-ish
+ },
+ {
+ // Calendar uses repeated 'RDATE' entries, which this iCal parser
+ // doesn't seem to recognise. Only the next event is visible, and
+ // the calendar has to be refreshed *after* the event has passed.
+ name: "HPR Community News recordings",
+ url: "http://hackerpublicradio.org/HPR_Community_News_schedule.ics",
+ symbol: "calendar-check",
+ color: "#C465A7" // purple
+ },
+ {
+ name: "Bank Holidays England and Wales",
+ url: "https://www.gov.uk/bank-holidays/england-and-wales.ics",
+ symbol: "calendar",
+ color: "#0040FF" // medium blue
+ },
+ {
+ name: "Bank Holidays Scotland",
+ url: "https://www.gov.uk/bank-holidays/scotland.ics",
+ symbol: "calendar",
+ color: "#C05A58" // brownish
+ },
+ {
+ name: "Bank Holidays NI",
+ url: "https://www.gov.uk/bank-holidays/northern-ireland.ics",
+ symbol: "calendar",
+ color: "#006600" // darker green
+ }
+ ]
+ }
+ },
+ // Current Weather
+ {
+ module: "currentweather",
+ position: "top_right",
+ config: {
+ location: "City of Edinburgh",
+ locationID: "3333229",
+ appid: "############################",
+ useLocationAsHeader: true
+ }
+ },
+ // Weather Forecast
+ {
+ module: "weatherforecast",
+ position: "top_right",
+ header: "Weather Forecast",
+ config: {
+ location: "City of Edinburgh",
+ locationID: "3333229",
+ appid: "############################",
+ useLocationAsHeader: true,
+ fade: false,
+ colored: true,
+ }
+ },
+ // News Feed
+ {
+ module: "newsfeed",
+ position: "bottom_bar",
+ config: {
+ feeds: [{
+ title: "Guardian World",
+ url: "https://www.theguardian.com/world/rss",
+ },
+ {
+ title: "BBC World",
+ url: "http://feeds.bbci.co.uk/news/world/rss.xml",
+ },
+ {
+ title: "BBC UK",
+ url: "http://feeds.bbci.co.uk/news/uk/rss.xml",
+ },
+ ],
+ showSourceTitle: true,
+ showPublishDate: true,
+ broadcastNewsFeeds: true,
+ broadcastNewsUpdates: true
+ }
+ },
+ // MMM-MQTT
+ {
+ module: 'MMM-MQTT',
+ position: 'top_left',
+ header: 'MQTT',
+ config: {
+ logging: false,
+ useWildcards: false,
+ mqttServers: [{
+ address: 'localhost', // Server address or IP address
+ port: '1883', // Port number if other than default
+ subscriptions: [{
+ // HPR pending comments
+ topic: 'mm2/comments',
+ label: 'Comments:',
+ sortOrder: 10,
+ maxAgeSeconds: 60
+ },
+ {
+ // HPR pending shows
+ topic: 'mm2/shows',
+ label: 'Shows :',
+ sortOrder: 20,
+ maxAgeSeconds: 60
+ },
+ {
+ topic: 'mm2/info',
+ label: 'Info :',
+ sortOrder: 30,
+ maxAgeSeconds: 60
+ },
+ {
+ topic: 'mm2/urgent',
+ label: 'Urgent :',
+ sortOrder: 40,
+ maxAgeSeconds: 120
+ },
+ ]
+ }],
+ }
+ },
+ // MMM-LothianBuses
+ {
+ module: 'MMM-LothianBuses',
+ header: 'Buses',
+ position: 'top_right',
+ config: {
+ apiKey: '############################################',
+ busStopIds: [
+ '36237526',
+ ]
+ }
+ },
+ // MMM-Remote-Control
+ {
+ module: 'MMM-Remote-Control',
+ config: {
+ customCommand: {}, // Optional, See "Using Custom Commands" below
+ customMenu: "custom_menu.json", // Optional, See "Custom Menu Items" below
+ showModuleApiMenu: true, // Optional, Enable the Module Controls menu
+ }
+ },
+
+ ]
+
+};
+
+/*************** DO NOT EDIT THE LINE BELOW ***************/
+if (typeof module !== "undefined") {
+ module.exports = config;
+}
+
+/*
+vim: set syntax=javascript ts=8 sw=4 ai et tw=100 fo=tcrqn21:
+*/
diff --git a/eps/hpr3039/hpr3039_custom.css b/eps/hpr3039/hpr3039_custom.css
new file mode 100755
index 0000000..9dca127
--- /dev/null
+++ b/eps/hpr3039/hpr3039_custom.css
@@ -0,0 +1,94 @@
+/* Some ideas based on a YT tutorial https://www.youtube.com/watch?v=OXpJylI3rG */
+
+body {
+ color: #000;
+ /* Seascape with rocks */
+ background-image: url("115514-1440x900.jpg");
+
+    /* Alternative background colour before the image was chosen */
+ background-color: #CCFFCC;
+ background-position: center;
+ background-size: cover;
+}
+
+/*
+ * Change the background and foreground colours throughout the modules so that each is displayed in
+ * a white semi-transparent panel with dark lettering.
+ */
+
+.module.alert {
+ color: rgba(255, 127, 0, 0.6);
+}
+
+.module.updatenotification {
+ color: rgba(255, 255, 255, 0.6);
+}
+
+.module.calendar {
+ background-color: rgba(255, 255, 255, 0.9);
+ color: #000;
+ border-radius: 8px;
+ padding: 8px;
+}
+
+.module.clock {
+ background-color: rgba(255, 255, 255, 0.9);
+ color: #000;
+ border-radius: 8px;
+ padding: 8px;
+}
+
+.module.currentweather {
+ background-color: rgba(255, 255, 255, 0.9);
+ color: #000;
+ border-radius: 8px;
+ padding: 8px;
+}
+
+.module.weatherforecast {
+ background-color: rgba(255, 255, 255, 0.9);
+ color: #000;
+ border-radius: 8px;
+ padding: 8px;
+}
+
+.module.MMM-MQTT {
+ background-color: rgba(255, 255, 255, 0.9);
+ color: #000;
+ border-radius: 8px;
+ padding: 8px;
+}
+
+.module.newsfeed {
+ background-color: rgba(255, 255, 255, 0.9);
+ color: #000;
+ border-radius: 8px;
+ padding: 8px;
+}
+
+.module.MMM-LothianBuses {
+ background-color: rgba(255, 255, 255, 0.9);
+ color: #000;
+ border-radius: 8px;
+ padding: 8px;
+}
+
+.dimmed {
+ color: #000;
+}
+
+.normal {
+ color: #000;
+}
+
+.bright {
+ color: #000;
+}
+
+.header {
+ color: #000;
+}
+
+/*
+vim: set syntax=css ts=8 sw=4 ai et tw=100 fo=tcrqn21:
+*/
diff --git a/eps/hpr3039/hpr3039_full_shownotes.html b/eps/hpr3039/hpr3039_full_shownotes.html
new file mode 100755
index 0000000..f5ed618
--- /dev/null
+++ b/eps/hpr3039/hpr3039_full_shownotes.html
@@ -0,0 +1,164 @@
+
Making a Raspberry Pi status display (HPR Show 3039)
+
A project making use of my Pi 3A+, an old monitor and MagicMirror2
+
Dave Morriss
+
Table of Contents
+
+
+
Introduction
+
I have had a project on my To Do list for a while: to make a status display from a Raspberry Pi. My vision was to show the state of various things including some HPR stuff, and I had imagined setting up a Pi with a monitor and controlling it over SSH.
+
I started on the project over the Christmas period 2019. I have a Raspberry Pi 3A+ (a sort of souped-up Pi Zero) which I bought on a whim and hadn’t found a use for (Yannick reviewed this RPi model in show 2711). I also had an old square Dell monitor from about 15 years ago which still worked (at least to begin with).
+
I had imagined I’d write some software of my own with a web front end which ran various tasks to monitor things.
+
However, in my researches I came across MagicMirror2 which I thought I might be able to use instead of writing my own thing.
+
MagicMirror2
+
MagicMirror2 (MM2) is designed for making a Smart Mirror: a physical mirror with a monitor mounted behind a two-way mirror panel made of glass or acrylic. While acting as a mirror the embedded monitor also shows information such as time, date and weather. There are many smart mirror building projects on YouTube and on various websites.
+
The MM2 software is written in JavaScript for use with Node.js and Electron. It has been designed in a modular form with a set of generally useful modules and an API which makes writing further modules by the community possible. There is an astonishing range of third-party modules available.
+
Requirements
+
The MM2 software is designed to run on a Raspberry Pi (model 2, 3 or 4) and needs the full Raspbian, not the Lite version.
+
Installation
+
This is dealt with well in the documentation. The recommended way to install the main application is using curl to download and pipe into Bash running under sudo. This did not feel right to me, so I worked my way through the script by hand to see what it was going to do. In the end the installation looked pretty safe (as far as I could tell).
+
It’s necessary to install Node.js through apt, clone the MM2 GitHub repository and use the npm utility to install further components.
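+
A rough sketch of those steps (details such as package names may differ; check the MM2 documentation for the current procedure):
+
sudo apt install nodejs npm
git clone https://github.com/MichMich/MagicMirror
cd MagicMirror
npm install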
+
I had set up the Pi to run SSH with a shared key (as I do for all my RPi machines - see Ken Fallon’s show 2356). Controlling MM2 this way is a little awkward, so I followed advice to install pm2, a process manager for Node.js. This makes it easier to start and stop MM2.
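+
With pm2 the pattern is roughly as follows; mm.sh here is a hypothetical wrapper script that starts MM2 (for example by running 'npm start' in the MagicMirror directory):
+
sudo npm install -g pm2
pm2 start mm.sh --name mm2
pm2 stop mm2
pm2 restart mm2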
+
Configuration
+
There are two key files which contain configuration information to manage MM2: config.js and custom.css.
+
config.js
+
The first file is MagicMirror/config/config.js which is a JavaScript data structure containing configuration details for MM2 itself and each of the modules being used. Depending on how you install MM2 this will contain the contents of a file called config.js.sample for the default modules shipped with the application.
+
It is amazingly easy to omit a comma or closing brace in the config.js file, and luckily there is a command that will check the file for you:
+
npm run config:check
+
custom.css
+
The second file is MagicMirror/css/custom.css which contains the CSS that controls the look of the interface, and can be changed to modify the representation of the modules. The name of this file is configurable, but I have left it as the default.
+
I have modified my copy of this file because I am not making a mirror but a display on a monitor. Each of the modules displays in a semi-transparent box over a static background image.
+
There are pitfalls in doing this because not all modules are as well-behaved with their use of CSS as I had expected. They sometimes assume the background is black with white lettering, rather than white with black, and this can cause some information to be rendered as white on white!
+
I have included my config.js and custom.css files along with this episode in case you want to look at them.
+
Modules
+
Currently I am using some of the default modules and a number of third-party ones. I will describe these briefly here. Each of the default modules is documented on the MM2 website showing the properties to be added to the modules array in config.js, and the third-party ones have documentation on their GitHub pages.
+
In general the module configurations look something like:
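{
    // Generic sketch only: the module name, position and options are placeholders
    module: "modulename",
    position: "top_left",       // one of the defined screen regions
    header: "Optional heading",
    config: {
        // module-specific options go here
    }
},
+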
The modules array is required and holds a configuration object per module. These vary according to the needs of the module. Don’t forget the commas between the elements!
+
Default modules are in MagicMirror/modules/default/ and third-party ones need to be installed in MagicMirror/modules/.
+
Update Notification
+
This default module is used to display the availability of updates. I have positioned mine at the top of the screen, and have found it useful to get a reminder when updates are available.
+
Calendar
+
You can merge multiple iCal calendars into this default module, as I have done. I have also included my Google Calendar. Sadly you can’t just point it at a local iCal file, which I would have liked to do, since I have a static multi-year Astronomical calendar I’d like to display.
+
Clock
+
A default module which shows a digital or analogue clock with a date, in 12- or 24-hour format. Quite a lot of configuration options!
+
Current Weather
+
Default module which uses https://home.openweathermap.org/ for details and needs an account to get the necessary API key. Many configuration options.
+
News Feed
+
Default module which is capable of pulling news headlines from multiple sources. I have selected the Guardian and two BBC feeds. This took a while to set up, searching for the exact feeds I wanted and tuning them to my liking.
+
Weather Forecast
+
Another default module which is similar to Current Weather, uses OpenWeatherMap and needs an API key.
+
MMM-MQTT
+
I use MQTT on my home network. I did an HPR show in 2016 about making a device to light RGB LEDs to signal when there were new HPR shows or comments needing attention. I use cron jobs to check for events and send MQTT messages to various brokers.
+
I wanted to be able to make the MM2 system into an MQTT message recipient, so first I installed the Mosquitto broker then set up the MMM-MQTT module to accept messages using the following topics: mm2/comments and mm2/shows. I also included topics mm2/info, mm2/urgent with no particular plan in mind at the time, and I haven’t yet used them!
+
The comment and show counts are quite useful. The LED display shows that there is work to do, but can’t indicate how many shows or comments. The MM2 display can do this however.
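+
Such a message can be published to the local broker from a cron job using mosquitto_pub; the topic is one of those above and the payload is hypothetical:
+
mosquitto_pub -h localhost -t mm2/comments -m '3'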
+
MMM-LothianBuses
+
This module interfaces to the API offered by the local bus company. I needed to apply for a (free) API key, which took a while to arrive since the allocation process seems to be manual. I set up the module to monitor the bus stop nearest my house where the services go into Edinburgh. This seems like a useful feature. I can also see this data in an app on my phone, but having a regularly updated status display seems like a good idea.
+
There are other similar modules available for MM2 catering for other transport systems.
+
MMM-Remote-Control
+
This module allows me to view and control the MM2 installation through my phone (or any local PC). It sets up an HTTP interface which I can connect to with a browser and do things like check for updates, turn (selected) modules on and off, shut down the Pi, and so forth. I have not explored all of the possibilities yet.
+
I have created a desktop bookmark on my phone that takes me to the interface for this module. I have a rather old version of a Cyanogen Mod derivative (ResurrectionRemix) on my phone, and the browsers available to it are rather unpleasant, resulting in problems with access to this MM2 module.
+
+Picture: Running system. The original monitor died but there was another little-used LG monitor available. I cheated here and grabbed a screenshot from a browser because my photos were pretty terrible!
+
Further Developments
+
+
I’m happy with MagicMirror2 to provide my status display, so I don’t think I’ll be moving away from it.
+
I’d like to tune some of the configuration and CSS settings a bit more. I suspect I still have hidden text somewhere in the display because I haven’t changed the colour of all text.
+
I’d like to learn how to write my own module. I have a static picture behind the module panels as seen in the screenshot, but I’d like to rotate between a set of pictures. There are modules that do something like this so I could adapt one of these.
+
I’d like to automate the turning off of the monitor. At the moment I use the front power switch to do this before I go to bed, but I’d like to completely remove power. To do this I’d like to set up a remote-control switch which is controlled by MQTT from the Pi that’s running MM2.
+
At the moment the Pi is simply resting on the shelf where the monitor is set up. This monitor has a VESA mounting point on the back, so I’d like to use it to hold the Pi in the case it’s in. I might need to make some 3D printed brackets for this.
+
+
diff --git a/eps/hpr3039/hpr3039_screenshot.png b/eps/hpr3039/hpr3039_screenshot.png
new file mode 100755
index 0000000..2223b7e
Binary files /dev/null and b/eps/hpr3039/hpr3039_screenshot.png differ
diff --git a/eps/hpr3063/hpr3063_LAMY_vista_1.png b/eps/hpr3063/hpr3063_LAMY_vista_1.png
new file mode 100755
index 0000000..8885626
Binary files /dev/null and b/eps/hpr3063/hpr3063_LAMY_vista_1.png differ
diff --git a/eps/hpr3063/hpr3063_LAMY_vista_2.png b/eps/hpr3063/hpr3063_LAMY_vista_2.png
new file mode 100755
index 0000000..23de043
Binary files /dev/null and b/eps/hpr3063/hpr3063_LAMY_vista_2.png differ
diff --git a/eps/hpr3063/hpr3063_LAMY_vista_3.png b/eps/hpr3063/hpr3063_LAMY_vista_3.png
new file mode 100755
index 0000000..29b3a15
Binary files /dev/null and b/eps/hpr3063/hpr3063_LAMY_vista_3.png differ
diff --git a/eps/hpr3063/hpr3063_LAMY_vista_4.png b/eps/hpr3063/hpr3063_LAMY_vista_4.png
new file mode 100755
index 0000000..fafb9d6
Binary files /dev/null and b/eps/hpr3063/hpr3063_LAMY_vista_4.png differ
diff --git a/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_1.png b/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_1.png
new file mode 100755
index 0000000..83e3cd8
Binary files /dev/null and b/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_1.png differ
diff --git a/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_2.png b/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_2.png
new file mode 100755
index 0000000..f9f391f
Binary files /dev/null and b/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_2.png differ
diff --git a/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_3.png b/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_3.png
new file mode 100755
index 0000000..ca7521a
Binary files /dev/null and b/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_3.png differ
diff --git a/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_4.png b/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_4.png
new file mode 100755
index 0000000..5278318
Binary files /dev/null and b/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_4.png differ
diff --git a/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_5.png b/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_5.png
new file mode 100755
index 0000000..482b6dd
Binary files /dev/null and b/eps/hpr3063/hpr3063_Pilot_Custom_Heritage_92_5.png differ
diff --git a/eps/hpr3063/hpr3063_TWSBI_VAC_700_1.png b/eps/hpr3063/hpr3063_TWSBI_VAC_700_1.png
new file mode 100755
index 0000000..8768cd4
Binary files /dev/null and b/eps/hpr3063/hpr3063_TWSBI_VAC_700_1.png differ
diff --git a/eps/hpr3063/hpr3063_TWSBI_VAC_700_2.png b/eps/hpr3063/hpr3063_TWSBI_VAC_700_2.png
new file mode 100755
index 0000000..385dfe4
Binary files /dev/null and b/eps/hpr3063/hpr3063_TWSBI_VAC_700_2.png differ
diff --git a/eps/hpr3063/hpr3063_TWSBI_VAC_700_3.png b/eps/hpr3063/hpr3063_TWSBI_VAC_700_3.png
new file mode 100755
index 0000000..0d50d25
Binary files /dev/null and b/eps/hpr3063/hpr3063_TWSBI_VAC_700_3.png differ
diff --git a/eps/hpr3063/hpr3063_TWSBI_VAC_700_4.png b/eps/hpr3063/hpr3063_TWSBI_VAC_700_4.png
new file mode 100755
index 0000000..d0eb31e
Binary files /dev/null and b/eps/hpr3063/hpr3063_TWSBI_VAC_700_4.png differ
diff --git a/eps/hpr3063/hpr3063_TWSBI_VAC_700_5.png b/eps/hpr3063/hpr3063_TWSBI_VAC_700_5.png
new file mode 100755
index 0000000..02749f2
Binary files /dev/null and b/eps/hpr3063/hpr3063_TWSBI_VAC_700_5.png differ
diff --git a/eps/hpr3063/hpr3063_Troika_Construction_1.png b/eps/hpr3063/hpr3063_Troika_Construction_1.png
new file mode 100755
index 0000000..f6db4c5
Binary files /dev/null and b/eps/hpr3063/hpr3063_Troika_Construction_1.png differ
diff --git a/eps/hpr3063/hpr3063_Troika_Construction_2.png b/eps/hpr3063/hpr3063_Troika_Construction_2.png
new file mode 100755
index 0000000..73a737f
Binary files /dev/null and b/eps/hpr3063/hpr3063_Troika_Construction_2.png differ
diff --git a/eps/hpr3063/hpr3063_full_shownotes.html b/eps/hpr3063/hpr3063_full_shownotes.html
new file mode 100755
index 0000000..7575419
--- /dev/null
+++ b/eps/hpr3063/hpr3063_full_shownotes.html
@@ -0,0 +1,155 @@
+
Pens, pencils, paper and ink - 1 (HPR Show 3063)
+
Looking at a few more writing implements
+
Dave Morriss
+
+
Table of Contents
+
+
+
Introduction
+
It’s been over four years since I did a show about fountain pens. It was in the What’s in My Toolkit series, entitled What’s in my case (show 1941, released on 2016-01-11).
+
I thought it might be appropriate to visit the subject once again. I want to tell you about some new pens and pencils I have acquired, some inks I am enjoying and some of the notebooks I have bought.
+
There’s too much for a single show here, so I’m making a mini-series of three shows. This also leaves the door open for more when the collection grows in the future!
+
Some new pens
+
I have not bought many pens since the last show, but have added one or two to my small collection and received a new pen as a present. I have not limited myself to fountain pens but also have a new ballpoint pen, a gel pen and some mechanical pencils. I’ll talk about a few of these in this episode.
+
LAMY vista
+
This is a German brand which seems to be available everywhere for a very reasonable price. I wanted to try one to see if I liked it.
+
This particular model is the vista (all lower case) which is a transparent (Demonstrator) version of the Safari fountain pen.
+
I know people who absolutely adore the Safari, but I’m only moderately enamoured. I’m not wild about the finger-grip part of the pen which is triangular in cross section. I don’t find it comfortable and dislike being forced to hold the pen in a particular way. Others find this one of the best features!
+
Although the nib is classified as Extra Fine it’s relatively coarse for my tastes. I have probably been spoiled by Japanese pens. What they define as Fine others would call Extra Fine. This German EF is what I’d call Medium!
+
The pen takes a cartridge or a converter. I bought the Z24 converter. Now the recommendation is the Z28 converter. I’m unsure of the difference.
+
The pen cost under £20 when I bought it in 2014, and the converter is under £5. The cap is a push fit, rather than screwing on. I find the pen will dry out moderately quickly if left with ink in it and the cap on.
+
+Picture: The LAMY vista with its cap on
+
+Picture: The LAMY vista with its cap off
+
+Picture: The LAMY vista nib close-up
+
+Picture: The LAMY vista writing sample
+
+
Note: In the audio I was confused about whether this pen is the Safari or the vista. They are essentially the same, but the Safari is not transparent.
+
+
TWSBI VAC 700
+
I mentioned the TWSBI brand in my last show, and spoke about the ECO, a good value piston filling pen. I still enjoy mine very much, but since then I have acquired the TWSBI VAC 700.
+
Unfortunately, this pen is no longer made (though a few stockists still seem to have them). There is a VAC Mini which has replaced the 700. Both are Demonstrator (transparent) pens. The 700 is clear acrylic and the Mini is available in clear or Smoke acrylic.
+
The TWSBI VAC pens have an unusual vacuum filling mechanism.
+
I bought a filling accessory for this pen, the VAC-20A, which consists of an ink bottle that screws onto the pen. This allows the chamber to be filled almost completely, something that can be a little difficult when filling from a standard ink bottle. The ink capacity is large for a fountain pen, so filling it to capacity is desirable for extended usage.
+
There is a YouTube video from The Goulet Pen Company showing the filling of this pen with the VAC-20A ink bottle if you are interested. The bottle shown in the video is the VAC-20, which only fits the VAC 700. The VAC-20A, which I have, fits the VAC Mini as well.
+
+Picture: The TWSBI VAC 700 with its cap on
+
+Picture: The TWSBI VAC 700 with its cap off
+
+Picture: The TWSBI VAC 700 nib close-up
+
+Picture: The TWSBI VAC-20A ink bottle
+
+Picture: The TWSBI VAC 700 writing sample
+
This is quite a large pen, but I find it very comfortable to use. Mine has an Extra Fine nib which I really like. Since the pen originates from Taiwan this seems to support the theory that fountain pens from this part of the world tend to have finer nibs than European pens.
+
The two models of TWSBI VAC pens have a valve on the end which releases the plunger. This needs to be slackened off while writing because it allows air to flow into the barrel and ink to flow to the nib. This is an unusual feature but it means with the valve closed the likelihood of ink leakage is very small indeed. Some people might not like this feature since it’s an extra thing to remember.
+
My pen originated from my son, from whom I bought it. I don’t remember how much it cost originally. If you wanted to buy one now yourself then you’d probably pay in the region of £75. The VAC-20A is under £15 (at the time of writing). The VAC Mini is under £60.
+
Pilot Custom Heritage 92
+
This is a piston filler from Japan. My son, who was in Japan in 2017, bought it for me as a present. This pen tends to be fairly expensive in the UK (and presumably elsewhere) but is priced lower in Japan itself.
+
This is an acrylic Demonstrator pen. In the UK only the clear version is available, but this one uses blue acrylic. The nib is fine, but I am not sure what it’s made of. The UK version uses a gold tip but I’m not sure whether this one does.
+
The pen has a quality feel to it, and writes beautifully. It’s not overly large, and to me feels more comfortable with the cap posted on the barrel.
+
+Picture: The Pilot Custom Heritage 92 with its cap on
+
+Picture: The Pilot Custom Heritage 92 with its cap off
+
+Picture: The Pilot Custom Heritage 92 nib close-up
+
My son and his girlfriend make leather items and they made me a leather carrying case to go with the pen.
+
+Picture: Hand-made pen case from my son
+
+Picture: The Pilot Custom Heritage 92 writing sample
+
Note: I will be looking at inks such as the J. Herbin Bleu Pervenche in a later show.
+
The UK version of this pen – clear with a gold nib – costs around £175. I don’t believe that it costs anywhere near as much in Japan!
+
Ballpoint Pens
+
I don’t have many of these because on the whole they don’t write very well nor do they suit my handwriting. I normally prefer fountain pens, gel pens and rollerball pens.
+
Troika Construction
+
This is something of a novelty pen, I think. It is made of metal and has a 6-sided barrel with a twist tip for extending and retracting the ballpoint. The barrel can be used as a ruler and has imperial and metric measurements. It also contains a spirit level. The end of the pen is fitted with a pad that activates touch screens; when unscrewed it reveals a double-ended screwdriver.
+
As a pen it’s nothing special. It takes small D1 refills which don’t contain a lot of ink and do not write all that well. I don’t really know why I bought it!!
+
+Picture: The Troika Construction ready for use
+
+Picture: The Troika Construction showing the screwdriver
+
Conclusion
+
These fountain pens included two of my favourites, and one I’m learning to love more! I do use the Troika ballpoint as a pen to write brief notes or shopping lists but I wouldn’t recommend it for general use.
+
I’ll cover more fountain pens and related matters in the next episode.
+
+
diff --git a/eps/hpr3092/hpr3092_Italix_Parsons_Essential_1.png b/eps/hpr3092/hpr3092_Italix_Parsons_Essential_1.png
new file mode 100755
index 0000000..22bbaa3
Binary files /dev/null and b/eps/hpr3092/hpr3092_Italix_Parsons_Essential_1.png differ
diff --git a/eps/hpr3092/hpr3092_Italix_Parsons_Essential_2.png b/eps/hpr3092/hpr3092_Italix_Parsons_Essential_2.png
new file mode 100755
index 0000000..85cdbd3
Binary files /dev/null and b/eps/hpr3092/hpr3092_Italix_Parsons_Essential_2.png differ
diff --git a/eps/hpr3092/hpr3092_Italix_Parsons_Essential_3.png b/eps/hpr3092/hpr3092_Italix_Parsons_Essential_3.png
new file mode 100755
index 0000000..df784b7
Binary files /dev/null and b/eps/hpr3092/hpr3092_Italix_Parsons_Essential_3.png differ
diff --git a/eps/hpr3092/hpr3092_Italix_Parsons_Essential_4.png b/eps/hpr3092/hpr3092_Italix_Parsons_Essential_4.png
new file mode 100755
index 0000000..875407d
Binary files /dev/null and b/eps/hpr3092/hpr3092_Italix_Parsons_Essential_4.png differ
diff --git a/eps/hpr3092/hpr3092_Italix_Parsons_Essential_5.png b/eps/hpr3092/hpr3092_Italix_Parsons_Essential_5.png
new file mode 100755
index 0000000..7b8734f
Binary files /dev/null and b/eps/hpr3092/hpr3092_Italix_Parsons_Essential_5.png differ
diff --git a/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_1.png b/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_1.png
new file mode 100755
index 0000000..7bfc0d3
Binary files /dev/null and b/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_1.png differ
diff --git a/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_2.png b/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_2.png
new file mode 100755
index 0000000..5e84959
Binary files /dev/null and b/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_2.png differ
diff --git a/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_3.png b/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_3.png
new file mode 100755
index 0000000..97c6ce3
Binary files /dev/null and b/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_3.png differ
diff --git a/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_4.png b/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_4.png
new file mode 100755
index 0000000..9512724
Binary files /dev/null and b/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_4.png differ
diff --git a/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_5.png b/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_5.png
new file mode 100755
index 0000000..06a8450
Binary files /dev/null and b/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_5.png differ
diff --git a/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_6.png b/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_6.png
new file mode 100755
index 0000000..9b610c7
Binary files /dev/null and b/eps/hpr3092/hpr3092_Kaweco_Classic_Sport_6.png differ
diff --git a/eps/hpr3092/hpr3092_Kuru_Toga_1.png b/eps/hpr3092/hpr3092_Kuru_Toga_1.png
new file mode 100755
index 0000000..00ce39a
Binary files /dev/null and b/eps/hpr3092/hpr3092_Kuru_Toga_1.png differ
diff --git a/eps/hpr3092/hpr3092_Kuru_Toga_2.png b/eps/hpr3092/hpr3092_Kuru_Toga_2.png
new file mode 100755
index 0000000..d162253
Binary files /dev/null and b/eps/hpr3092/hpr3092_Kuru_Toga_2.png differ
diff --git a/eps/hpr3092/hpr3092_Kuru_Toga_3.png b/eps/hpr3092/hpr3092_Kuru_Toga_3.png
new file mode 100755
index 0000000..5681269
Binary files /dev/null and b/eps/hpr3092/hpr3092_Kuru_Toga_3.png differ
diff --git a/eps/hpr3092/hpr3092_Platinum_Prefounte_1.png b/eps/hpr3092/hpr3092_Platinum_Prefounte_1.png
new file mode 100755
index 0000000..8ac4b0a
Binary files /dev/null and b/eps/hpr3092/hpr3092_Platinum_Prefounte_1.png differ
diff --git a/eps/hpr3092/hpr3092_Platinum_Prefounte_2.png b/eps/hpr3092/hpr3092_Platinum_Prefounte_2.png
new file mode 100755
index 0000000..f5d58de
Binary files /dev/null and b/eps/hpr3092/hpr3092_Platinum_Prefounte_2.png differ
diff --git a/eps/hpr3092/hpr3092_Platinum_Prefounte_3.png b/eps/hpr3092/hpr3092_Platinum_Prefounte_3.png
new file mode 100755
index 0000000..6c2af81
Binary files /dev/null and b/eps/hpr3092/hpr3092_Platinum_Prefounte_3.png differ
diff --git a/eps/hpr3092/hpr3092_Platinum_Prefounte_4.png b/eps/hpr3092/hpr3092_Platinum_Prefounte_4.png
new file mode 100755
index 0000000..e31d7e9
Binary files /dev/null and b/eps/hpr3092/hpr3092_Platinum_Prefounte_4.png differ
diff --git a/eps/hpr3092/hpr3092_Rhodia_paper_1.png b/eps/hpr3092/hpr3092_Rhodia_paper_1.png
new file mode 100755
index 0000000..a6a68ee
Binary files /dev/null and b/eps/hpr3092/hpr3092_Rhodia_paper_1.png differ
diff --git a/eps/hpr3092/hpr3092_Rhodia_paper_2.png b/eps/hpr3092/hpr3092_Rhodia_paper_2.png
new file mode 100755
index 0000000..185407a
Binary files /dev/null and b/eps/hpr3092/hpr3092_Rhodia_paper_2.png differ
diff --git a/eps/hpr3092/hpr3092_full_shownotes.html b/eps/hpr3092/hpr3092_full_shownotes.html
new file mode 100755
index 0000000..f1889c6
--- /dev/null
+++ b/eps/hpr3092/hpr3092_full_shownotes.html
@@ -0,0 +1,159 @@
+
Pens, pencils, paper and ink - 2 (HPR Show 3092)
+
Looking at more writing equipment
+
Dave Morriss
+
+
Table of Contents
+
+
+
Introduction
+
This is the second in a short series about pens, pencils, writing paper and ink. In this episode we will look at three more fountain pens (two lower-priced and one around £50), a mechanical pencil and some paper.
+
Kaweco Sport
+
This is a pen which is quite small when closed but becomes a more normal size when open with the oversized cap posted. I bought the black one with a fine nib. It takes a small international standard sized cartridge, and there is a piston converter available for it too, which I have but haven’t started using yet. A pocket clip is available but doesn’t come as standard; I didn’t buy one of these since I tend to use a pen case.
+
At under £20 this is a good value pen that’s easy to keep in a pocket or other small container. The converter is around £5, and the basic clip under £2.50.
+
There are versions of this pen made of steel, aluminium and brass which I must say I would like to own. The brass one seems highly desirable to me, but I am holding off spending over £65 on such a thing!
+
+Picture: The Kaweco Classic Sport with its cap on
+
+Picture: The Kaweco Classic Sport with its cap off
+
+Picture: The Kaweco Classic Sport disassembled
+
+Picture: The Kaweco Classic Sport nib close-up
+
+Picture: The Kaweco Classic Sport compared to a LAMY vista
+
+Picture: The Kaweco Classic Sport writing sample
+
This is a pleasant pen to use. It’s small as mentioned, but the size seems normal when the cap is posted. The fine nib is a little “dry” for me, though it’s settling down as I use it. The meaning of “dry” in this context is that the ink doesn’t flow as well as it could. This can happen when a fountain pen is running out of ink, or has been left to dry out for a while (perhaps there’s dry ink in the feed and it needs to be cleaned out). This pen is relatively new, so it’s got plenty of ink and hasn’t dried out. The cap screws onto the barrel, so this should help reduce the likelihood of drying out.
+
Another reason for “dryness” is that the nib is not allowing ink to flow through it as it should. I bought a second-hand pen on eBay a few years ago where this was a serious problem. I learnt how to correct this problem by making the two “tines” of the nib move apart fractionally. I have the tools to deal with this as a consequence.
+
For the moment I shall continue using the Kaweco to see if the nib settles down of its own accord, and will report back in the next episode. If I feel it needs some maintenance I’ll describe what I did in that show.
+
Platinum Prefounte
+
This pen is standard size, and is fairly basic. I bought it to try it out, and the fact that it costs under £10 made such an experiment attractive. It’s a refillable cartridge pen, though I haven’t yet found a converter for it. I find it very good for the price. I bought a green model with the fine nib (0.3mm) as well as some green ink cartridges, and I am enjoying using it.
+
+Picture: The Platinum Prefounte with its cap on
+
+Picture: The Platinum Prefounte with its cap off
+
+Picture: The Platinum Prefounte nib close-up
+
+Picture: The Platinum Prefounte writing sample
+
The Prefounte is similar to another Platinum pen, the Preppy, which I mentioned in the first episode of this series as a good first pen to try out. They seem to use the same nib though they differ in the shape of the barrel, cap and clip. The Prefounte has a metal clip.
+
One of the selling points of the Prefounte is that the push-on cap seals very well, so much so that the pen does not dry out after being left unused for a year. I think the Preppy is similar in this respect, since mine hasn’t dried out, but it’s not used as one of the selling points like it is with the Prefounte.
+
I like the Prefounte (and the Preppy) and would recommend either as a first entry into fountain pens.
+
Italix Parson’s Essential
+
Years ago, when I was in my final years of High School, I wanted to learn to write in the Italic style. I bought a fountain pen with an italic nib, and wrote everything in this script, though I was never particularly good at it. I kept it up for a few years but couldn’t really use it to take notes in lectures once I got to university, so gradually I did less and less. My handwriting did become influenced by this style, but I haven’t tried to write in the formal italic style for ages.
+
When I saw this pen, which is part of a range available with italic nibs, I decided to buy one to try and get back to italic script again.
+
The Parson’s Essential is a very solid, old-fashioned style of pen made of what feels like brass with a thick lacquer finish – black in this case. The Italix brand offers a variety of pens like this one, with the choice of many italic nib styles. I think the one I have is fitted with a straight medium italic nib.
+
This one came with a piston-type converter and could take standard European cartridges instead, though I haven’t tried using any.
+
The Italix brand is available from MrPen Limited in the UK. The current price for this pen is under £50.
+
+Picture: The Parson’s Essential with its cap on
+
+Picture: The Parson’s Essential with its cap off
+
+Picture: The Parson’s Essential nib close-up
+
+Picture: The Parson’s Essential disassembled
+
+Picture: The Parson’s Essential writing sample
+
Mechanical Pencils
+
I have always liked using mechanical pencils of various sorts, and am easily tempted to buy into the novel features they can offer!
+
uni-ball Kuru Toga
+
My son saw this pencil during a visit to Japan in 2017 and told me about it. I checked the UK dealers and found they stocked a version of it, and ordered one. It is pleasant to use, and the lead rotates as you write with it to make sure it wears evenly.
+
+Picture: Kuru Toga pencil
+
+Picture: Kuru Toga pencil close-up of tip
+
+Picture: Kuru Toga pencil close-up of eraser
+
There is now a broad range of pencils with this name. Not very many seem to be sold in the UK, but the various online sellers seem to have a number of them. I’m sure that many more are available in Japan!
+
Fountain Pen friendly paper
+
Finding a good quality paper that’s not too expensive but allows the use of fountain pens without unwanted behaviour was a bit of a challenge. In the past decade or so more paper types have become available, and the various stationers and pen shops sell a variety of paper brands that work well with fountain pens.
+
This time I’m putting forward one brand: Rhodia.
+
Rhodia paper
+
The paper sold by Rhodia is very smooth and quite heavy. The weight is usually 80gsm (grams per square metre) or greater. A fountain pen glides well over such paper and the ink from writing does not pass through the paper to the other side. Neither does it show feathering where the writing strokes develop rough edges because the ink has soaked into the fibres of the paper.1
+
Other unpleasant effects such as bleeding and ghosting are reduced by good paper. Bleeding is where the ink passes through the paper and can even mark the next page or the surface under the paper. Ghosting is related and is where the writing can be seen from the back of the paper.
+
Rhodia is a French company, established in 1934. They make a wide range of paper products, as well as some writing implements such as mechanical pencils.
+
Living as I do in a city with six universities, there are a number of stationery shops that cater to students. They stock Rhodia notebooks and pads and often their prices are quite reasonable. It’s a brand I tend to collect when I get the chance! I like the grid and dot papers. Many of my writing samples in these notes have used Rhodia grid paper.
There is some feathering in the Parson’s Essential writing sample (though it might not be very visible in the picture). For this I used a Pukka Pad, with 80gsm gridded paper. This paper does not deal well with some fountain pens and/or inks, unlike the Rhodia paper, but it’s cheaper!↩
+
diff --git a/eps/hpr3152/hpr3152_Durol_01.png b/eps/hpr3152/hpr3152_Durol_01.png
new file mode 100755
index 0000000..77a634b
Binary files /dev/null and b/eps/hpr3152/hpr3152_Durol_01.png differ
diff --git a/eps/hpr3152/hpr3152_Durol_02.png b/eps/hpr3152/hpr3152_Durol_02.png
new file mode 100755
index 0000000..2b270b0
Binary files /dev/null and b/eps/hpr3152/hpr3152_Durol_02.png differ
diff --git a/eps/hpr3152/hpr3152_Roxon_01.png b/eps/hpr3152/hpr3152_Roxon_01.png
new file mode 100755
index 0000000..5ddfdfa
Binary files /dev/null and b/eps/hpr3152/hpr3152_Roxon_01.png differ
diff --git a/eps/hpr3152/hpr3152_Roxon_02.png b/eps/hpr3152/hpr3152_Roxon_02.png
new file mode 100755
index 0000000..607d256
Binary files /dev/null and b/eps/hpr3152/hpr3152_Roxon_02.png differ
diff --git a/eps/hpr3152/hpr3152_Roxon_03.png b/eps/hpr3152/hpr3152_Roxon_03.png
new file mode 100755
index 0000000..efaec3f
Binary files /dev/null and b/eps/hpr3152/hpr3152_Roxon_03.png differ
diff --git a/eps/hpr3152/hpr3152_Roxon_04.png b/eps/hpr3152/hpr3152_Roxon_04.png
new file mode 100755
index 0000000..614cef5
Binary files /dev/null and b/eps/hpr3152/hpr3152_Roxon_04.png differ
diff --git a/eps/hpr3152/hpr3152_Victorinox_01.png b/eps/hpr3152/hpr3152_Victorinox_01.png
new file mode 100755
index 0000000..e5e3246
Binary files /dev/null and b/eps/hpr3152/hpr3152_Victorinox_01.png differ
diff --git a/eps/hpr3152/hpr3152_Victorinox_02.png b/eps/hpr3152/hpr3152_Victorinox_02.png
new file mode 100755
index 0000000..0fb46f6
Binary files /dev/null and b/eps/hpr3152/hpr3152_Victorinox_02.png differ
diff --git a/eps/hpr3152/hpr3152_Victorinox_03.png b/eps/hpr3152/hpr3152_Victorinox_03.png
new file mode 100755
index 0000000..dcfb862
Binary files /dev/null and b/eps/hpr3152/hpr3152_Victorinox_03.png differ
diff --git a/eps/hpr3152/hpr3152_full_shownotes.html b/eps/hpr3152/hpr3152_full_shownotes.html
new file mode 100755
index 0000000..cc20a3a
--- /dev/null
+++ b/eps/hpr3152/hpr3152_full_shownotes.html
@@ -0,0 +1,122 @@
+
My Pocket Knives (HPR Show 3152)
+
I talk a little about some pocket knives I often carry
+
Dave Morriss
+
+
Table of Contents
+
+
+
Introduction
+
As a boy I was allowed to have a penknife1 from about the age of 10. Since then I have tended to carry pocket knives with me on a regular basis.
+
I have three knives that often travel with me, though two might recently have become illegal to carry in the UK because they lock.
+
The knives are:
+
+
Victorinox Huntsman
+
Durol locking knife
+
Roxon KS-S501
+
+
Knives
+
Victorinox Huntsman
+
I expect most people know about these knives, which are generally called Swiss Army Knives; they usually have many tools within them.
+
The Victorinox brand is now predominant, as discussed in the Wikipedia article.
+
My knife is the Huntsman model and is classified as a medium-sized knife by Victorinox.
+
+Picture: Victorinox knife, all tools shown
+
+Picture: Victorinox knife, all tools shown
+
+Picture: Victorinox knife, closed with detachable tools
+
I forgot to open the saw when displaying all the accessories, but you can see it on the Victorinox website if you wish!
+
I originally bought one of these when a tool shop in Edinburgh (now long gone) had a sale. I really loved that knife and carried it everywhere, but I lost it in the woods near Dalkeith when out with my kids.
+
After feeling sorry for myself for a while I decided to get another. I bought this one on Amazon where the price was not as good as the original but was not too painful. I’m more careful with this one!
+
Durol knife
+
+Picture: The Durol knife closed
+
+Picture: The Durol knife open
+
Manufactured in Thiers, in central France, known as the knife city, these knives are classics. Thiers is in the Puy-de-Dôme department of Auvergne.
+
From Wikipedia:
+
+
Thiers is a major historical centre of knife manufacturing, with about one hundred companies and a cutlery museum; seventy percent of French pocketknives, kitchen and table knives are manufactured in Thiers.
+
+
This particular knife has a wooden handle (probably ash), with a metal collar holding the rivet on which the blade pivots. The blade locks both closed and open, via a mechanism activated by a red button below the collar. The knife design may have been derived from the other classic French pocketknife, the Opinel (mentioned by Shane Shennan on HPR show 2650).
+
This model is the basic one. There are others with different colours, and accessories such as a corkscrew and bottle opener.
+
I bought this knife in France in the 1980s. My boss and I were driving to Paris2 from Edinburgh for a conference. We stopped at a hypermarché (hypermarket) near Calais to buy something for lunch. I saw and bought this knife there to cut up my baguette and cheese!
+
Roxon S501 knife
+
This is a recent purchase. It’s a single-bladed knife that can be opened with one hand, something I have never owned before. It locks open, but there is an easily accessible unlock button and it can be closed with one hand. It has to be admitted that this design has a right-hand bias, however.
+
It has a belt/pocket clip on the rear, and from that side a substantial pair of scissors is accessible. These are sharp and cut well.
+
+Picture: The Roxon knife closed
+
+Picture: The Roxon knife blade open
+
+Picture: The Roxon knife rear view closed
+
+Picture: The Roxon knife rear view, scissors open
+
This knife was bought from Amazon UK for £25. It’s made in China according to the box it was in, but it’s difficult to find details on the web. It seems very well made and it is comfortable to hold and use. My only concern is that the shape of the knife edge is a little more difficult to sharpen than other knives.
+
Conclusion
+
If I had to choose one knife to take with me on a trip I’d take the Victorinox for its sheer versatility. In reality, I tend to have all three in close proximity most of the time!
In the audio I mentioned the penknife I had as a boy, saying the handle was covered in pearl. I meant to say mother of pearl.↩︎
+
We shared the driving in my boss’s car. We drove to Dover and took the Hoverspeed hovercraft service to Calais, then drove to Paris from there.↩︎
+
diff --git a/eps/hpr3161/hpr3161_full_shownotes.html b/eps/hpr3161/hpr3161_full_shownotes.html
new file mode 100755
index 0000000..2a81019
--- /dev/null
+++ b/eps/hpr3161/hpr3161_full_shownotes.html
@@ -0,0 +1,202 @@
+
How I manage podcast listening (HPR Show 3161)
+
Another reply to MrX’s episode on how he listens to podcasts
+
Dave Morriss
+
Table of Contents
+
+
+
Introduction
+
I have spoken in the past about the podcast management system I have created, but have never gone into much detail about how I manage the playing of episodes.
+
Details of all my podcasts are in a database which runs on my desktop PC
+
+
I keep details from the feed for each episode
+
If I currently hold an audio file relating to an episode then the database knows the path to it
+
Since I’m a hoarder, I keep episode details in the database for episodes I have already listened to.
+
+
+
+
Every podcast feed is assigned to a group. I have groups such as music, science, documentary and technical.
+
+
+
I interface to the database on my desktop PC using command-line scripts and through Pdmenu menus.
+
The episodes are on a Raspberry Pi which runs all the time and has an SSD attached, and I mount the podcast directory on my desktop using NFS
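+
A minimal sketch of that NFS mount, assuming the RPi is called 'rpi' and exports the directory (the local path is the one seen in the transcripts below):
+
sudo mount -t nfs rpi:/srv/Podcasts /home/cendjm/Bashpodder/Podcasts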
+
I download podcast episodes overnight on the RPi using a locally hacked version of the venerable Bashpodder, which I have talked about in the past.
+
+
+
I have several MP3 players with Rockbox installed
+
I usually load my players using a script that is aware of the feed groupings that I have defined. It makes a playlist in the database and writes a copy on the player. The database playlist table holds the alias of the player so I can have a playlist per player in it.
+
I have another script which can upload the contents of a feed for when that is convenient.
+
+
+
As I play an episode I run a script that marks that episode in the database as being played.
+
It’s Rockbox that tracks which episode I’m listening to and where I am in the audio on the player
+
+
+
After playing an episode I run a script that lists episodes marked as playing and allows me to delete them from my PC and from the database
+
I don’t actually delete anything from the player until I next upload to it
+
+
+
There is an issue with the size of a group and the space on a player. This has worsened recently because I don’t seem to be able to listen to podcasts fast enough and most of my players don’t have a huge amount of space on them.
+
Since I don’t delete files on the player until I upload new ones I don’t always know how much space there will be. I have to find a solution to this!
+
I have a script that shows numbers of files and total sizes for the groups, both for files copied to players and those pending. I plan to use these sizes to make decisions about what gets uploaded.
+
In the worst case I can write episodes to a player selected by feed name if there are capacity issues.
+
+
Uploading feed contents to a player
+
The player being used in this example is a Sansa Clip with an alias of 'Clip'. This player (as with most of my players) runs Rockbox. It gets mounted on /media/usb2. The commands that are run (by the alias player_mount) look on the player for a file called PlayerName and display its contents using figlet:
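+
The alias itself isn’t shown, but its effect is roughly this, assuming an /etc/fstab entry for the mount point:
+
mount /media/usb2
figlet < /media/usb2/PlayerName
+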
Next, the script copy_episodes performs the upload of episodes from a particular feed. The -c C option says to remove any media found in the PODCASTS directory on the player; using -c N instead allows further feeds to be appended to what’s already on the player. The feed name is a regular expression, so 'Hacker' maps to Hacker Public Radio. The script generates a playlist on the player in the Playlists directory for use by Rockbox; it’s called podcasts.m3u and contains paths to the episodes, relative to the player.
+
$ ./copy_episodes -m /media/usb2 -p Clip -c C -f Hacker
+Warning: There are already files on the player
+Deleting media files
+Copying files from feed 'Hacker' to the Clip player
+ 1: /home/cendjm/Bashpodder/Podcasts/2020-08-14/hpr3140.ogg
+ 2: /home/cendjm/Bashpodder/Podcasts/2020-08-17/hpr3141.ogg
+ 3: /home/cendjm/Bashpodder/Podcasts/2020-08-18/hpr3142.ogg
+ 4: /home/cendjm/Bashpodder/Podcasts/2020-08-19/hpr3143.ogg
+ 5: /home/cendjm/Bashpodder/Podcasts/2020-08-20/hpr3144.ogg
+ 6: /home/cendjm/Bashpodder/Podcasts/2020-08-21/hpr3145.ogg
+ 7: /home/cendjm/Bashpodder/Podcasts/2020-08-24/hpr3146.ogg
+Created playlist
+COPY 7
+NOTICE: Rows changed = 7
+$ player_umount
+
There is a backup of the playlist on the PC and the playlists table in the database is also updated. The messages 'COPY 7' and 'NOTICE: Rows changed = 7' are generated by the database for debugging purposes.
+
I use an alternative script (copy_group) when uploading episodes to a player by group.
+
The alias player_umount unmounts the player.
+
The playlist is used on the player by navigating to the 'Playlist Catalogue' entry in the main menu. Clicking the central button on the Sansa Clip shows the list with the name 'podcasts'. Clicking on that shows a numbered list of episodes. Clicking on the first entry starts the playing of the playlist.
+
My Rockbox players are set to write a bookmark for the currently playing episode when the player is turned off. They are also set to auto-resume after stopping, so I can stop and turn off during the playback of an episode and the player will resume where I left off when I turn it on again. If for any reason I navigate away from the playlist I can get back to where I was with the 'Resume Playback' item on the main menu, or I could look in 'Recent Bookmarks' to find the bookmark for what was being played before.
+
I run several scripts to mark episodes as being played and to delete those I have listened to. The following is an edited transcript.
+
The first section shows what’s active on any players known to the database. The 'Clip' player has the HPR episodes added above, none of which have yet been played. I’m up to podcast episode 38 on the playlist for the 'Zip2' player.
+
1 [Clip,01] HPR: GIMP: Selection Tools
+ 2 [Clip,02] HPR: Lessons learnt from Magic the Gathering game design
+ 3 [Clip,03] HPR: tcsh
+ 4 [Clip,04] HPR: LibreOffice 7.0 Released!
+ 5 [Clip,05] HPR: Pentesting: Insecure Object Reference
+ 6 [Clip,06] HPR: A light bulb moment, part 1
+ 7 [Clip,07] HPR: Help Me Help you with HPR eps!
+
+ 8 [Zip2,38] mintCast 338 - Two Oh Snap
+ 9 [Zip2,39] mintCast 338.5 - The Ripple Effect
+ 10 [Zip2,40] mintCast 339 - OLTs? More Like OLGeez
+ 11 [Zip2,41] mintCast 339.5 - ZFS Butter Recognize
+ 12 [Zip2,42] mintCast 340 - Unit of Measurement
+ 13 [Zip2,43] mintCast 340.5 - Will It Blend?
+ .
+ .
+ .
+Select number(s): 8
+Marking:
+ 8: mintCast 338 - Two Oh Snap
+Made 1 change(s)
+01: [Zip2 ] mintCast 337.5 - Managing the Managers
+02: [Zip2 ] mintCast 338 - Two Oh Snap
+Select numbers: 1
+Deleting:
+ 1: mintCast 337.5 - Managing the Managers
+
I truncated the complete list on 'Zip2'. The prompt 'Select number(s):' is asking which of these episodes I’m next going to listen to. I select 8, which is a mintCast episode on the 'Zip2' player.
+
The next prompt 'Select numbers:' follows a list of episodes currently marked as being played. I select number 1, the previous mintCast episode I just finished. This is then deleted and the database updated.
+
The listing then shows how many podcast files still exist, with their cumulative size. This is followed by a (truncated) list of durations of the remaining episodes. I currently have 1 week, 1 day, 47 minutes and 58 seconds of continuous listening to catch up with my 10GB of podcasts!
+
Total = 262 (83 directories)
+10G Podcasts/
+
+[2020-08-24 09:00:53] 01 weeks, 1 days, 01 hours, 57 minutes, 02 seconds
+[2020-08-24 11:02:42] 01 weeks, 1 days, 00 hours, 47 minutes, 58 seconds
+--------------------------------------------------------------------------------
+ Player: Zip2
+ Album: MP3 - mintCast
+ Artist: mintCast
+ Title: mintCast 338 - Two Oh Snap
+ Length: 01:13:31
+ Genre: Podcast
+ Track: 338
+ Year: 2020
+Comment: The podcast by the Linux Mint community for all users of Linux.
+ Path: /home/cendjm/Bashpodder/Podcasts/2020-07-02/ep338.mp3
+--------------------------------------------------------------------------------
+
Finally a display script is run against all episodes marked as playing to show a summary of what is being listened to on my players.
+
I usually only have a maximum of three players active at any time. That way, I can change to another player when the one I’m listening to needs its battery recharged.
+
Managing ID3 and similar tags on episodes
+
As I process the day’s batch of incoming podcasts I manipulate the audio metadata (tags). I have a Perl script I wrote which uses a rules file for each feed and performs checks and actions on the episodes it finds.
+
In general I try to ensure that podcast episodes have a valid title that can be seen in the lists shown above. Sometimes the audio does not contain tags, so my script can access the fields from the RSS or Atom feed and use these to fill in the audio tags.
+
Some feeds place the feed name in the title (as with the mintCast example above), but if not, I use rules to add something meaningful. In the case of HPR the title usually makes no reference to HPR itself, so I add the characters 'HPR: ' to the front of the title for my convenience.
+
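As a trivial illustration of the kind of action involved (a sketch using the id3v2 command-line tool rather than my actual Perl script, and with a made-up file name):
+
# Give an HPR episode a title that is recognisable in a player's lists
id3v2 -t "HPR: A light bulb moment, part 1" hpr_episode.mp3
+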
I find it surprising how many feeds produce episodes which have no metadata at all. I used to write to feed owners and ask for these to be filled in, but this was largely a waste of time, so I evolved my tag_manager script to make decisions about what should be in the metadata fields.
+
+
diff --git a/eps/hpr3197/hpr3197_full_shownotes.html b/eps/hpr3197/hpr3197_full_shownotes.html
new file mode 100755
index 0000000..835e0ea
--- /dev/null
+++ b/eps/hpr3197/hpr3197_full_shownotes.html
@@ -0,0 +1,244 @@
Pens, pencils, paper and ink - 3 (HPR Show 3197)
+
Looking at another batch of writing equipment
+
Dave Morriss
+
Introduction
+
This is the third in a short series about pens, pencils, writing paper and ink.
+
In this episode I look at two Chinese fountain pens, a mechanical pencil, a gel pen, some inks and some paper.
+
Fountain Pens
+
Jinhao range
+
A few years ago I was tempted by the Jinhao range of pens from China. These are (to my mind) good-looking pens, usually solidly made with an attractive finish, and quite low-priced.
+
I bought two: the Jinhao X450 (eBay in 2013, £5.28) and the Jinhao 500 (eBay in 2016, $8.99 USD). I think the 500 has been discontinued, but the X450 is available from many pen sellers.
+
+Picture: The Jinhao X450 and 500 with their caps on
+
+Picture: The Jinhao X450 with its cap off
+
+Picture: The Jinhao X450 nib close-up
+
+Picture: The Jinhao X450 disassembled
+
+Picture: The Jinhao X450 writing sample
+
+Picture: The Jinhao 500 with its cap off
+
+Picture: The Jinhao 500 nib close-up
+
+Picture: The Jinhao 500 disassembled
+
+Picture: The Jinhao 500 writing sample
+
Both are heavy pens which feel as if the barrel and cap are made of metal. Both have a colourful finish which is described as celluloid in some sources and have gold-like embellishments (slightly tacky for my tastes, but acceptable). The caps on both are push-fit, though they seem quite secure.
+
The cap on the X450 cannot be posted easily. It falls off unless pushed very securely onto the end of the barrel. It’s heavy and feels uncomfortable to me.
+
The 500 cap can be posted but the balance seems wrong as far as I am concerned, so I tend not to use it that way.
+
Both pens use converters which were included. I have not tried either of them with cartridges, but I believe they take the international standard size.
+
Both write smoothly and quite pleasantly, though the X450 nib seems very large! The X450 has a shaped grip with indentations which seem to be there to guide you to hold the pen properly (similar to the LAMY range discussed in an earlier show). The 500 has no such shaping of the grip, which I prefer. Both pens have steel nibs of a medium size. Both feel like they would be called fine on a European pen.
+
These pens suffer from drying out when left unused for a time, even with the cap on. The impression I get is that the caps do not seal very well. This makes me avoid using them, and makes me reluctant to carry them about with ink in them for fear of leakage.
+
As I was writing this section I was going to say that I would not recommend either of these pens. However, I inked up the 500 and used it for a few days and I have to say I came to enjoy it more than I did when I first bought it. I even used it posted for a while and gradually got more used to the weight of it being more towards the back of the pen. The nib is smooth and ink flow is good (so long as it hasn’t had time to dry out significantly).
+
I keep a journal (or Commonplace Book) for thoughts, observations and general jottings, as well as for my writing practice to ensure I use a fountain pen regularly. The Jinhao 500 was good to use for this purpose.
+
The X450 had been left with ink in it, and had dried out, which you can see if you look at the pictures very closely. I cleaned it out and re-inked it. I have written with it a little, but my overall conclusion is that this is not my favourite pen. It’s too big, and the nib is not fine enough. I will continue using it for a while to see if my opinion changes.
+
So, to conclude, if you have a hankering for a chunky old-fashioned fountain pen at a very reasonable price I wouldn’t dissuade you from buying either of these. Don’t leave them for more than a few days with ink in though!
+
Mechanical Pencils
+
I have a number of these, which I use regularly for jottings, making lists, etc. I do a little woodworking from time to time, though I tend to use a traditional pencil when marking wood for cutting and similar. The mechanical pencils are used mainly for writing and sketching. I am concentrating on my latest purchase here.
+
Pentel Graph Gear 1000
+
This is a very robust mechanical pencil made of metal with a knurled grip area which has translucent rubber inserts to stop it sliding through your fingers. It’s quite heavy so might not be to all tastes.
+
+Picture: The Pentel Graph Gear 1000 - ready to write
+
Pressing the button on the top causes the lead and a surrounding sleeve to extend. Pressing the top part of the pocket clip or just clipping the pencil in a shirt or jacket pocket makes the lead and sleeve retract.
+
+Picture: The Pentel Graph Gear 1000 - retracted
+
As with most mechanical pencils, removing the cap reveals an eraser. Removing that allows more 0.5mm leads to be added.
+
+Picture: The Pentel Graph Gear 1000 - cap removed
+
The pencil is very popular with engineers and woodworkers since it is strong enough to survive use in a workshop quite well.
+
Personally, I’m not keen to use such a pencil in the woodworking context at the moment because I feel I’d ruin it. Once I have a better workshop I may think again.
+
I bought a set of three of these pencils from Amazon for £19.99 in February 2020. I have kept one for myself and have passed the other two to my children.
+
There are other sizes of this pencil available as well as the 0.5mm I bought: 0.3mm, 0.4mm, 0.7mm, and 0.9mm. The size is marked on the barrel and colour coded in the writing on the pencil, the grip, and other places.
+
There is a rotatable indicator on the pencil barrel which can be set to show the type of lead being used. I use HB lead for mine, but other options are available.
+
+Picture: The Pentel Graph Gear 1000 - using HB leads
+
I was slightly put off by the weight of this pencil to begin with, but after a few weeks of use I have grown to like it very much.
+
One minor downside as far as I am concerned is the lack of a means of cleaning out the channel where the lead protrudes for writing. Mechanical pencils I have owned in the past have had a very fine (0.5mm presumably) piece of metal inserted into the base of the eraser. This can be used to clean out the lead channel should there be problems.
+
I actually dropped my pencil on a concrete floor straight onto the protruding lead. It didn’t damage the pencil but the lead was shattered and plugged up this channel. I didn’t have anything fine enough to clear out the pieces - a process that can take a bit of work and the ability to apply pressure. A bit of fuse wire would have done the job but my house has circuit breakers! I got there in the end and the pencil works fine again.
+
Gel Pens
+
Zebra Sarasa Clip
+
When I’m not using a fountain pen or a mechanical pencil I’m likely to write with a gel pen.
+
+Picture: The Zebra Sarasa Clip
+
In late 2019 I bought one of these Zebra pens, after having heard them recommended. The one I have is black with a 0.5mm tip. The point is retractable and there is a strong spring-loaded clip that holds fast to a pocket or to stationery. There are many other colours available.
+
+Picture: The Zebra Sarasa Clip - showing the rubber grip and 0.5mm tip
+
It’s a comfortable pen to use. It has a rubber grip and is quite chunky, making it suitable for larger hands. The water-based gel ink dries quickly and is moderately water resistant.
+
You might be able to see from the ink level that I have used up about half of the original quantity, so it’s certainly a pen I turn to a fair bit. A slight downside, though not a surprising one, is that there are no refills: the pen needs to be replaced when empty.
+
Inks
+
All of the inks I own are of the simpler type – coloured and water soluble.
+
The fountain pen (and dip pen) ink ranges include inks with tiny particles in them to make the result shimmer, inks with scents, and permanent inks. I have not yet found the need to use any of these!
+
J. Herbin Bleu Pervenche (Periwinkle Blue) ink
+
J. Herbin is a French company that produces pens, ink, sealing wax and other stationery supplies. It is an old company; see the website for some historical information about it.
+
+Picture: J. Herbin Bleu Pervenche ink (box)
+
My favourite ink is the Bleu Pervenche (Periwinkle Blue), though I also have some brown cartridges called Terre de feu (Tierra Del Fuego or Land of Fire).
+
There are many inks to choose from, including some with glitter in them and others which are scented.
+
Diamine inks
+
Diamine Inks have been producing inks since 1864, and are based in Liverpool in the UK. They sell fountain pen ink in cartridges as well as in 30ml, 40ml, 50ml and 80ml bottles.
+
+Picture: Diamine Inks 30ml and 80ml
+
Since the 80ml bottles are moderately expensive (though they last quite a long time for me), I often buy the 30ml bottles to test out the colours. Then if I really like them I’ll buy a larger bottle.
+
In case you are interested, in the 30ml range I have:
+
+
Ancient Copper
+
Apple Glory
+
Autumn Oak
+
China Blue
+
Chocolate Brown
+
Eau de Nil
+
Tyrian Purple
+
Ultra Green
+
Violet
+
+
In the 80ml range I have:
+
+
ASA Blue
+
Bilberry
+
Damson
+
Onyx Black
+
Sherwood Green
+
+
You can see demonstrations of these colours on the Diamine website.
+
Paper for fountain pens
+
I already mentioned Rhodia paper in the last show of this group.
+
There are other brands that make a point of stating that their paper products are fountain pen friendly.
+
Clairefontaine
+
Clairefontaine is a French company with its main site at Etival-Clairefontaine, located 90km from Strasbourg, along the Meurthe river. The Clairefontaine mill has been making paper since 1858 and other stationery products since 1890.
+
Currently I have one Clairefontaine notebook, with the strange name Age Bag. It’s A4 size, contains ruled paper simply stapled into a card cover which has a leather-like pattern on it. I bought it to try it out. The paper is great for ink and the notebook opens flat – an important consideration in my case.
+
+Picture: Clairefontaine notebook detail
+
Oxford Stationery
+
Oxford Black n’ Red Casebound Notebook
+
I bought a number of these from Amazon when the price was £5 each several years ago. They are A4 size and have 192 ruled pages. The paper used is white 90 gsm Optik, and there is a hard cover with a sewn-in spine and a ribbon marker.
+
The paper in these notebooks is excellent for fountain pens in my experience. The only downside as far as I am concerned is that there is only one type of line spacing, and it is rather wide for my writing.
+
I was going to add a photograph of these notebooks, but a recent spate of tidying seems to have made them temporarily unavailable!
+
Oxford Campus Refill Pad
+
When my children were at school I used to look out for these prior to the start of the school year when they were cheap and plentiful in the supermarkets. I amassed a stock of them – somewhat more than were needed! I use these with my fountain pens since they also have 90 gsm Optik paper which is very friendly to this type of pen.
+
These pads are also wide ruled, which is a slight disadvantage, in my opinion.
+
+Picture: Oxford and Clairefontaine Stationery
+
Epilogue
+
I don’t have a great deal more to talk about on this subject at the moment. If I think of any other topics or find any more pens, pencils and inks I think might be of interest I will do another show.
+
If you are at all interested in the subject and have your own collection of stationery items please do a show about them!
+
Some scripts and a database for randomly choosing which meal to cook
+
Dave Morriss
+
Overview
+
I live on my own, but I cook for members of my family from time to time. Each week we all get together and cook dinner for Wednesday and Thursday. I usually do the cooking but we are starting to share these duties for certain meals.
+
In 2019 I thought it would be useful if I had some sort of random chooser to decide what next week’s meal was going to be. I wrote a Bash script called choose_meal, using a simple CSV file of meal names and the date last eaten to avoid choosing the same one too often. The shortcomings of this approach soon became apparent!
+
It wasn’t long before choose_meal was rewritten in Perl. This time I decided to use a database, and chose SQLite to create it. My database contained just two tables, one for the meals themselves (called slightly confusingly 'meal_history'), and another for a record of the choices made (called 'meal_log') – the ability to produce historical reports seemed like a desirable feature!
+
In 2019 the design of this system was very specific to our needs: one choice per week on a Wednesday. It was not something that could be used by anyone else – which seemed like a bad idea.
+
In late 2020 and early 2021 the system was redesigned, as will be discussed in the detailed notes. In May 2021 a more general design was added to the public GitLab repository and the preparation of this show was begun.
+
I had never intended this system to hold recipes. This was partly because I have built a collection of recipes I have constructed from various sources and amended as I have made them. I print these and keep them in a ring-binder for reference as I cook. In some cases the meals described in the database are multi-component ones (such as the dishes that make up a curry for example), so it doesn’t seem appropriate to hold these here.
+
I might rethink this in the future however.
+
Database
+
Note: I was a bit confused by the names and usages of these tables when recording the audio. I guess this goes to show that my name choices were bad!
+
Data stored per meal
+
The overall design of the database became a little more complicated as time went on. The data held for a given meal became:
+
meal_history

Item            Description
--------------  ------------------------------------------------------------
Id              A unique numeric key for this meal
Name            The unique name for the meal
Minimum delay   The minimum number of days between occurrences of the meal
Last eaten      The date this meal was last eaten (might be blank if it’s
                a new addition)
Enabled         A yes/no setting (0/1 actually) indicating whether the meal
                is available for choice. We sometimes give certain meals
                a “rest” for example.
Notes           General notes about the meal. Contents up to the user.
+
The delay setting was added to prevent the same meal being chosen repeatedly week after week, and so to ensure reasonable variety.
+
Having the capability to disable a meal entry was useful, perhaps because we were bored with it, or because it was seasonal. It’s also been a useful way to add a placeholder where we want to try a particular type of meal but haven’t yet hit on the best recipe.
+
The notes tend to be used to suggest amendments to a recipe, or to record feedback on a particular choice.
+
Meal log
+
As mentioned earlier, keeping a history of previous choices is quite interesting (to me anyway). The database holds a log table which is written to every time a choice is made. This means we can compute how many times a particular meal has been chosen and can look back to see what was chosen when. It’s by no means vital to the functioning of the system though.
+
The main items stored in the table are:
+
meal_log

Item              Description
----------------  ----------------------------------------------------------
Id                A unique numeric key for this log entry
Meal id           Link to the meal table
Date of entry     The date the log entry was written
Minimum delay     The minimum number of days at the time the meal was chosen
Previously eaten  The date this meal was previously eaten
Last eaten        The date this meal was last eaten (might be blank if it’s
                  a new addition)
+
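For instance, computing how many times each meal has been chosen is a single query against these two tables. Here is a sketch (the real column names in meals_db.sql may differ from these guesses):
+
sqlite3 meals.db <<'SQL'
-- Count how often each meal has been chosen, most popular first
-- (column names are assumed, not taken from the real schema)
SELECT mh.name, count(*) AS times_chosen
  FROM meal_log ml
  JOIN meal_history mh ON mh.id = ml.meal_id
 GROUP BY mh.id
 ORDER BY times_chosen DESC;
SQL
+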
For the record, the log table is written to using a trigger, a database feature that performs a task when something changes in a table.
+
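The schema is not reproduced here, but the idea is along the following lines (a sketch with assumed column names, not the actual contents of meals_db.sql):
+
sqlite3 meals.db <<'SQL'
-- Sketch: whenever a meal's "last eaten" date is updated, record the
-- choice in the log table automatically
CREATE TRIGGER IF NOT EXISTS meal_chosen
AFTER UPDATE OF last_eaten ON meal_history
BEGIN
    INSERT INTO meal_log (meal_id, date_of_entry, minimum_delay,
                          previously_eaten, last_eaten)
    VALUES (old.id, date('now'), old.minimum_delay,
            old.last_eaten, new.last_eaten);
END;
SQL
+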
Scripts
+
There are two scripts as part of this system:
+
+
choose_meal – makes a random choice of a single meal from the database
+
manage_meal – allows management of meal entries within the database and can generate reports on the state of the database
+
+
The scripts are both written in Perl and contain embedded documentation. Calling either of these scripts with the option -help will give a brief summary of how to use it. More in-depth information can be obtained by using the -manpage option. Alternatively the documentation in these scripts can be viewed by typing the following in the directory where the scripts are held:
+
perldoc choose_meal
+OR
+perldoc manage_meal
+
The scripts and various other files are to be found in a Git repository on GitLab. It is intended that this could be cloned to the system on which it is to be run and then installed in a directory of choice. The repository contains documentation on how this can be done, and there is an installation section in these notes.
+
Script overviews
+
To be prepared for use, a database needs to be created and populated with some meals. The number of these depends on how often you plan to choose meals and what delay you set for the meals. You want to ensure that there are meals eligible to be chosen whenever you make a random choice!
+
manage_meal
+
The database can be populated with manage_meal using the -add option.
+
Meals can be added in a disabled state and can be enabled (made available for choice) once you are ready to use the database.
+
You can see what’s in the database by using the -summary option. Individual meals can be examined with the -list option and the notes viewed by adding -full to that.
+
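For example, the two reporting options just mentioned might be invoked like this (output not shown, and the exact argument syntax is documented by -help):
+
./manage_meal -summary
./manage_meal -list -full
+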
The repository contains further examples of the use of this script as well as details of the options it accepts.
+
choose_meal
+
With the database populated this script can be run to make a random choice among the meals. I tend to run choose_meal with the -verbose and -dry-run options initially, because that makes it report what meals it is choosing from, but does not save the choice.
+
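For example:
+
./choose_meal -verbose -dry-run
+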
By default choose_meal makes a choice for the current date, but can be used to choose for a future date by using the -date=DATE option, as in:
+
./choose_meal -date=2021-07-19
+
If you want to make a choice for a future day of the week, like next Friday, then it’s possible to use a command substitution containing a call to the GNU date command, as in:
+
./choose_meal -date=$(date -d 'next Friday' +%F)
+
An alternative, when the same day is required each week, is to set up the configuration file (see below).
+
If a date is provided then it overrides the configuration file setting.
+
Configuration file
+
This file is called .choose_meal.yml by default and is expected to be in the same directory as the script. Alternative names may be used but need to be provided with the -config=FILE option.
+
The file is written in YAML format, so the first line must consist of three dashes (hyphens). The weekday setting defines the name of the day of the week that will be selected, and may be provided with a language specification if desired. The word 'weekday' must begin in column 1 and end with a colon (':'). The keywords 'name' and 'language' must be indented by two spaces, and each must end with a colon followed by the day name or language name respectively:
+
---
+weekday:
+ name: Wednesday
+ language: English
+
The repository has a file called .choose_meal.yml_example as an example of how to prepare this file.
+
Note that YAML is related to JSON in structure. If it helps to understand the structure, the example YAML shown above can be represented in JSON form as follows:
+
{
  "weekday": {
    "name": "Wednesday",
    "language": "English"
  }
}
+
Installation
+
You can keep your version of the software up to date with the repository, if it changes, by running the following git command from the same directory:
+
$ cd ~/Git/weekly_menus
+$ git pull
+
You could use the clone of the repository to hold your database, but I don’t recommend it. I made another top-level directory called Weekly_menus and copied the relevant files into it:
The scripts will require a number of Perl modules which you will have to install if you don’t already have them. I used a Perl tool called App::cpanminus to do this, installed with its standard curl bootstrap as shown below. On the Pi, using user pi, I wasn’t prompted for a password when using sudo, but you may be. The password required will be your login password. This assumes you have curl installed, which was the case for me:
+
$ curl -L https://cpanmin.us | sudo perl - App::cpanminus
+
Having acquired cpanminus (the command you need, provided by the module) you can collect the remaining Perl modules as follows:
+
$ cd Weekly_menus
+$ cat modules_needed | xargs cpanm -S
+
This uses the file modules_needed, which is part of the repository. It contains a list of the required modules, and these are fed into xargs which sends them to cpanminus running in sudo mode. This can take a while to complete since many of the modules have dependencies. Testing this on a Raspberry Pi, 77 modules were installed in total.
+
Running this at a later date will ensure that all modules are current and will usually take far less time to run.
+
Finally, you need to create the database, which can be achieved as shown below. It causes sqlite3 to create a database file and populate it using the SQL in meals_db.sql:
+
$ sqlite3 meals.db < meals_db.sql
+
Managing the database
+
Use manage_meal to add meals to the database.
+
$ ./manage_meal -add
+Enter details of the new meal:
+Name: Haggis, Neeps & Tatties
+Minimum delay: 28
+Enabled: y
+Last eaten:
+Notes: McSween's Vegetarian Haggis is pretty good
+Added new meal: 'Haggis, Neeps & Tatties'
+
In the current release meals can be added and edited, enabled and disabled, but not deleted. This is because the log might contain references to older meals and deleting them would break the history.
+
If this seems to be an issue it may be possible to rethink this in future.
+
Making backups
+
It’s a good idea to make backups of the database in case you delete it or mess it up in some way. Included in this system is a script to be run out of cron called cronjob_bup. This needs to be set up to be called at some particular time of day. Use the command crontab -e to edit your cron settings. I run it at 21:55 nightly with the following line in the crontab file:
+
55 21 * * * $HOME/Weekly_menus/cronjob_bup
+
+
+
This specifies the path to the script and tells cron to run it daily at 21:55.
+
The cronjob_bup script will create a directory ~/Weekly_menus/backup and will dump meals.db into it with a name like: meals_db_20210702_215501.sql.bz2. This is a SQL file which is compressed with the bzip2 utility. It can be used to restore the database with a command such as the following:
+
$ cd ~/Weekly_menus
$ rm meals.db
$ bzcat backup/meals_db_20210702_215501.sql.bz2 | sqlite3 meals.db
+
This will restore the database structure and data.
+
The cronjob_bup script will delete backups older than 140 days (approximately 5 months).
+
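I haven’t shown cronjob_bup here, but the pruning step amounts to something like this sketch:
+
# Delete compressed SQL dumps more than 140 days old
find "$HOME/Weekly_menus/backup" -name 'meals_db_*.sql.bz2' -mtime +140 -delete
+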
Making backups to a sub-directory is better than nothing, but if the disk is lost then everything will be lost! I actually run a daily backup task on my main workstation which writes changed files to an external disk. This is a bit better than just saving a local copy, but not good for really vital stuff.
+
What’s next for this system?
+
We use this system with the database and associated scripts on a weekly basis. It does all that we want it to do at present. I say “we” but I am the sole user (and developer) at the moment.
+
Perhaps speaking about it on HPR and releasing it to the world via GitLab will be of use to others. If so, I will be pleased and will welcome feedback.
+
Possible ‘To Do’ list
+
+
Add a means of including or linking to recipes
+
Tidy up option processing (which is a bit messy at present)
+
+
diff --git a/eps/hpr3413/hpr3413_coproc_test.awk b/eps/hpr3413/hpr3413_coproc_test.awk
new file mode 100755
index 0000000..357b90a
--- /dev/null
+++ b/eps/hpr3413/hpr3413_coproc_test.awk
@@ -0,0 +1,27 @@
+#!/usr/bin/awk -f
+
+BEGIN {
+ # Turn off buffering with bash
+ coproc = "stdbuf -i0 -o0 -e0 bash"
+
+ i = 0
+
+ # Commands we'll send
+ com[i++] = "date"
+ com[i++] = "whoami"
+ com[i++] = "id"
+ com[i++] = "exit"
+
+ i = 0
+
+ # Write and read in a loop → buffering problems?
+ do {
+ print com[i++] |& coproc
+ coproc |& getline results
+ if (i in com) print ":", results
+ } while (i in com)
+ close(coproc)
+
+}
+
+# vim: syntax=awk:ts=8:sw=4:ai:et:tw=78:fo=tcrqn21
diff --git a/eps/hpr3413/hpr3413_coproc_test.sh b/eps/hpr3413/hpr3413_coproc_test.sh
new file mode 100755
index 0000000..ac088f7
--- /dev/null
+++ b/eps/hpr3413/hpr3413_coproc_test.sh
@@ -0,0 +1,53 @@
+#!/bin/bash -
+
+#
+# Use bash in the coprocess but turn off buffering
+#
+process='stdbuf -i0 -o0 -e0 bash'
+
+#
+# Indexed array of bash commands
+#
+declare -a com=('date +%F' 'whoami' 'id' 'echo "$BASH_VERSION"'
+ 'printf "Hello\nWorld\n"')
+
+#
+# Count commands in the array
+#
+n="${#com[@]}"
+
+#
+# Start the coprocess
+#
+coproc child { $process; }
+
+#
+# Loop though the commands
+#
+i=0
+while [[ $i -lt $n ]]; do
+ # Show the command
+ echo "\$ ${com[$i]}"
+
+ # Send to coprocess
+ echo "${com[$i]}" >&"${child[1]}"
+
+ # Read a line from the coprocess
+ read -u "${child[0]}" -r results
+
+ # Show the line received
+ echo "$results"
+
+ ((i++))
+done
+
+#
+# Send an EOF to the coprocess (if needed)
+#
+if [[ -v child_PID ]]; then
+ echo "-- End --"
+ exec {child[1]}>&-
+
+ # Flush any remaining results
+ cat <&"${child[0]}"
+fi
diff --git a/eps/hpr3413/hpr3413_full_shownotes.html b/eps/hpr3413/hpr3413_full_shownotes.html
new file mode 100755
index 0000000..6ed1540
--- /dev/null
+++ b/eps/hpr3413/hpr3413_full_shownotes.html
@@ -0,0 +1,395 @@
Bash snippet - using coproc with SQLite (HPR Show 3413)
+
Sending multiple queries to a running instance of sqlite3
+
Dave Morriss
+
Introduction
+
I am in the process of rewriting some scripts I use to manage Magnatune albums. I’m a lifetime Magnatune member and have access to the whole music collection. I wrote a script for downloading albums and placing them in my ~/Music directory which I talked about in 2013 (show 1204). The original scripts are still available on GitLab and I know of one other person who made use of them!
+
Since 2013 I have written a few other support scripts, for example one to manage a queue of albums I want to buy and download, and one which summarises the state of this queue.
+
It’s this 'show_queue' script I am currently updating (called show_queue_orig, and available in the resources to this show). The original version of this script took Magnatune album URLs from a file (acting as a queue of stuff I wanted to buy), parsed out a piece of the URL and used it to grep a pre-prepared summary in another file. This file of summaries had been made from a master XML file provided by Magnatune (see update_albums on GitLab).
+
Magnatune has moved away from this master XML file to a SQLite database in recent years, so I want to perform a database lookup for each URL to list its details.
+
The first version of the new script wasn’t difficult to write: just extract the search data as before and run a query on the database using this data. I have included this script which I call show_queue_db_1 amongst the resources for this episode, so you can see what I’m talking about – and what I want to improve on. It felt bad to be performing multiple calls on the sqlite3 command in a loop, so I looked around for an alternative way.
+
In April 2019 clacke did a show (number 2793) about the Bash coproc command.
+
This command creates a subshell running a command or group of commands which is connected to the calling (parent) process through two file descriptors (FDs). It’s possible for the calling shell to write to the input descriptor and read from the output one and thereby communicate with whatever is running in the subshell.
+
I was vaguely aware of coproc at the time of clacke’s show but hadn’t looked into it. I found the show fascinating but didn’t have a use for the feature at the time.
+
To solve my need to show my Magnatune queue of future purchases, it looked as if a sqlite3 instance running in a subshell could be given queries one after the other and return the answers I needed. My journey to a Bash script using coproc then followed.
+
Details
+
My ‘pending’ queue
+
As I said, I have a file containing Magnatune URLs of the form:
+
http://magnatune.com/artists/albums/antiqcool-acousticguitar/
+
The final component is the 'SKU', which is the key to the album in the system. In the original XML-based system I see the following example information when I run my current script:
+
Artist: Antiqcool
+Album: Original Instrumental Acoustic Guitar Songs Vol 1
+Genres: Alt Rock,Folk-Rock,Instrumental Rock
+Code: antiqcool-acousticguitar
+----
+
I store the URLs because that’s the form Magnatune uses to send album details in their periodic email messages about new albums. It’s easier to cut and paste them.
+
The original show_queue script just reads this queue file in a loop and looks up the SKU in a file of reformatted XML information. As mentioned, I have included this script for reference as one of the resources accompanying this show (show_queue_orig).
+
More about coproc
+
In clacke’s HPR show 2793 he described the coproc command in some detail, with examples of how it behaves.
+
It is documented in a fairly terse fashion in the Bash Reference Manual (see link below).
+
The coproc command
+
In essence coproc runs a command as a subshell (or coprocess) with a two-way pipe connected to it. This allows the shell which generated it to write to and read from the coprocess.
+
The command format is:
+
coproc [NAME] command [redirections]
+
The syntax is a little strange to my way of thinking. The use of the name depends on the type of command. If it’s a simple command then no name can be provided, and the default of COPROC is used, so you can only run one coprocess at a time. The alternative to a simple command is a command group enclosed in braces or parentheses, and the user-supplied name is used in this case; otherwise, if there’s no name, COPROC is used as before.
+
The relevance of the name is that it is used to create variables relating to the coprocess. There’s a variable ‘name_PID’ which holds the process ID number of the subshell (coprocess), and an array of file descriptors, described next.
+
The two-way pipe for communicating with the coprocess is connected to two file descriptors in the executing shell. These are stored in an indexed array called name. Element zero contains the descriptor for the standard output of the coprocess, and element 1 the descriptor for standard input.
+
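So, for a coprocess given the name 'child', writing to it and reading from it look like this (a minimal sketch; buffering is turned off with stdbuf, for reasons discussed below):
+
coproc child { stdbuf -o0 bash; }       # start bash as a coprocess
echo 'echo $((6 * 7))' >&"${child[1]}"  # write a command to its standard input
read -r answer <&"${child[0]}"          # read one line from its standard output
echo "$answer"                          # prints: 42
exec {child[1]}>&-                      # close its input so it can exit
+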
Note: I haven’t talked about Bash file descriptors in my Bash scripting shows but plan to do so before long.
+
Simple usage
+
Here’s an example of running the date command in a coprocess. Not a lot of point in doing this of course, but it might help explain things reasonably well:
+
$ coproc date; cat <&"${COPROC[0]}"
[1] 6529
Sat 21 Aug 21:55:01 BST 2021
[1]+  Done                    coproc COPROC date
+
The coproc command is followed by the command to run - 'date'. Then, after a semicolon the output from the coprocess is displayed with cat. We do this on the same line because the coprocess will finish very quickly and will delete the COPROC array, making it impossible to see the output.
+
The first line beginning '[1]' shows the PID of the coprocess (an examination of the process hierarchy should show this to be a subprocess of the calling process)
+
The next line shows the date returned from the coprocess via cat
+
The second line beginning '[1]' shows that the coprocess has finished.
+
+
If for any reason you have a coprocess that continues to run unexpectedly you can look for it with the Bash 'jobs' command – this is job 1 in the above case, as shown by the '[1]' followed by the PID. You can kill the job with the command 'kill %1'.
+
Gory details
+
I have tried to learn the intricacies of coproc since deciding to use it, but can’t say that I fully understand all the details yet!
+
Using coproc for single line stuff is not too difficult, but seems quite pointless. However, things get a lot more complex when dealing with coprocesses that receive and send multiple lines of data.
+
The issues which complicate matters are:
+
+
It’s your coprocess, so you know whether it expects input or not, but you can only assume it’s receiving what you send from the way it responds. There may be buffering problems that complicate this.
+
When you’ve finished sending stuff to the coprocess and you want to tell it it’s all done you must close the input file descriptor. You do this with the rather arcane command 'exec {NAME[1]}>&-' where 'NAME' is the name you assigned or was assigned for you ('COPROC'). This is an example of the operator described in the Bash Reference Manual under Duplicating File Descriptors. Though the manual doesn’t explain how you use that operator with a FD held in an array! This subject is on the list for the Bash Tips series in due course.
+
You can’t really tell how much output to read from the coprocess, and you may be dealing with something that performs I/O buffering so it will hold on to its output unexpectedly. This can cause a deadlock if you get it wrong! A deadlock is where the parent and child processes are both waiting for the other process to do something.
+
+
I have written a coproc example in the form of a Bash script. It’s called coproc_test.sh:
+
#!/bin/bash -
+
+#
+# Use bash in the coprocess but turn off buffering
+#
+process='stdbuf -i0 -o0 -e0 bash'
+
+#
+# Indexed array of bash commands
+#
+declare -a com=('date +%F' 'whoami' 'id' 'echo "$BASH_VERSION"'
+ 'printf "Hello\nWorld\n"')
+
+#
+# Count commands in the array
+#
+n="${#com[@]}"
+
+#
+# Start the coprocess
+#
+coproc child { $process; }
+
+#
+# Loop though the commands
+#
+i=0
+while [[ $i -lt $n ]]; do
+ # Show the command
+ echo "\$ ${com[$i]}"
+
+ # Send to coprocess
+ echo "${com[$i]}" >&"${child[1]}"
+
+ # Read a line from the coprocess
+ read -u "${child[0]}" -r results
+
+ # Show the line received
+ echo "$results"
+
+ ((i++))
+done
+
+#
+# Send an EOF to the coprocess (if needed)
+#
+if [[ -v child_PID ]]; then
+ echo "-- End --"
+ exec {child[1]}>&-
+
+ # Flush any remaining results
+ cat <&"${child[0]}"
+fi
+
The key points are:
+
+
The coprocess is running an instance of bash which expects commands and returns results
+
The command to be run is preceded by 'stdbuf -i0 -o0 -e0' which turns off all buffering
+
Commands to be sent to it are in an array and are fed to it one at a time
+
I use a counter and a while loop to do this, but could just as well have used for (( i = 0; i < n; i++ )).
+
Each time a command is sent one line is read back (using a read command on the FD in array child[0]) and displayed
+
An end of file is sent to the coprocess by closing the input channel, but only if it’s still running
+
I use a cat command to flush any remaining output after closing the FD.
+
Most of the commands generate one line of output, but the last one: 'printf' creates two. Only one is read in the loop, but the cat returns it after the input FD has been closed.
+
What would happen if a command was sent which produced no output? Try adding ‘:’ to the list of commands (this being a “null” command in Bash). The script will hang waiting for output that will never come. Adding a timeout to the read could be a way to avoid this problem; see the sketch after this list.
+
+
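A minimal sketch of the timeout idea: read accepts a -t option giving a timeout in seconds, and its exit status is greater than 128 when the timeout expires:
+
# Instead of the plain read in the loop, give up if no line arrives
# within 2 seconds
if ! read -t 2 -u "${child[0]}" -r results; then
    echo "(no output received from the coprocess)"
fi
+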
Running this script gives the following output:
+
$ date +%F
+2021-08-21
+$ whoami
+cendjm
+$ id
+uid=1000(cendjm) gid=1000(cendjm) groups=1000(cendjm),4(adm),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),108(netdev),110(lpadmin),114(bluetooth),115(scanner)
+$ echo "$BASH_VERSION"
+5.1.4(1)-release
+$ printf "Hello\nWorld\n"
+Hello
+-- End --
+World
+
+
Note that the two-line printf at the end gets its first line displayed in the loop, then the loop ends and the script detects that the coprocess is still running, writes '-- End --' and then flushes the remaining line.
+
Also note that the job control messages we saw in the simple example above are disabled by Bash when running coproc out of a script.
+
I’m not sure that this example shows anything useful, however. It seems more of a novelty than anything else!
+
Coprocesses in gawk
+
In the spirit of enquiry I wrote a brief gawk script to do largely the same as the previous example. The coprocess features are not available in plain awk; they are a GNU extension. This script, called coproc_test.awk, has been included in the resources for this show, where it can be downloaded.
+
I will not cover it any further in this show.
+
My eventual script using coproc
+
I found out how to run coproc to do what I wanted, but I spent a long time working out how to do it. Looking back, I got an HPR show out of it (though I doubt whether anyone will thank me for it!), and answered my question, but it probably wasn’t worth it.
+
The eventual script is presented here and is a resource available for download (show_queue_db_2).
+
#!/bin/bash -
+#===============================================================================
+#
+# FILE: show_queue
+#
+# USAGE: ./show_queue
+#
+# DESCRIPTION: Show the pending queue, expanding each album's details from
+# the database.
+#
+# / This version calls sqlite3 once and feeds it queries in a loop /
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: ---
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.2.1
+# CREATED: 2020-09-15 12:38:03
+# REVISION: 2021-08-05 16:39:05
+#
+#===============================================================================
+
+set -o nounset # Treat unset variables as an error
+
+SCRIPT=${0##*/}
+#DIR=${0%/*}
+
+VERSION='0.2.1'
+
+#
+# Files and directories
+#
+BASEDIR="$HOME/MusicDownloads"
+DATADIR="$BASEDIR/Magnatune_Data"
+SCRIPTDIR="$BASEDIR/magnatune-downloader"
+
+QUEUE="$SCRIPTDIR/pending"
+DB="$DATADIR/sqlite_normalized.db"
+
+#
+# Sanity checks
+#
+[ -e "$QUEUE" ] || { echo "$QUEUE not found"; exit 1; }
+
+#
+# Check the queue contains data
+#
+if [[ ! -s $QUEUE ]]; then
+ echo "$SCRIPT($VERSION): there is nothing in the queue"
+ exit
+fi
+
+RE='^http://magnatune.com/artists/albums/([A-Za-z0-9-]+)/?$'
+
+#
+# Template SQL for printf
+#
+SQLtemplate=$(cat <<ENDSQL1
+SELECT
+ ar.name AS Artist,
+ al.name AS Album,
+ group_concat(distinct ge.name) AS 'Genre',
+ group_concat(distinct sg.name) AS 'Subgenre',
+ al.sku as Code
+FROM albums al
+JOIN artists ar ON al.artist_id = ar.artists_id
+JOIN genres_albums ga on al.album_id = ga.album_id
+JOIN genres ge ON ge.genre_id = ga.genre_id
+JOIN subgenres_albums sa on al.album_id = sa.album_id
+JOIN subgenres sg ON sg.subgenre_id = sa.subgenre_id
+GROUP BY al.album_id
+HAVING sku = '%s';
+.print '--------'
+ENDSQL1
+)
+
+#
+# Start the coprocess
+#
+coproc dbproc { stdbuf -i0 -o0 sqlite3 -line "$DB"; }
+
+#
+# Read and report the queue elements.
+#
+n=0
+while read -r URL; do
+ ((n++))
+
+ if [[ $URL =~ $RE ]]; then
+ SKU="${BASH_REMATCH[1]}"
+ else
+ echo "Problem parsing URL in queue (line $n): $URL"
+ continue
+ fi
+
+ #
+ # Build the query and write it to the coprocess
+ #
+ # shellcheck disable=SC2059
+ printf "$SQLtemplate\n" "$SKU" >&"${dbproc[1]}"
+
+done < "$QUEUE"
+
+#
+# Close the input pipe (a file descriptor move documented as '[n]>&digit-'
+# which "moves the file descriptor digit to file descriptor n, or the standard
+# output (file descriptor 1) if n is not specified". There is no digit here,
+# so presumably /nothing/ is being moved to the file descriptor in dbproc[1].
+#
+exec {dbproc[1]}>&-
+
+#
+# Collect everything from the coprocess
+#
+cat <&"${dbproc[0]}"
+
+# vim: syntax=sh:ts=8:sw=4:ai:et:tw=78:fo=tcrqn21
+
Salient points are:
+
+
The query is stored in the variable SQLtemplate in the form of a format string for printf. This lets me substitute a SKU value each time it’s used. The string consists of a SQL query and a SQLite ‘dot’ command (.print) which I use to print a line of hyphens between each album summary.
+
The coprocess consists of a sqlite3 command preceded by a stdbuf call which turns off all buffering.
+
In the loop which is reading the queue we generate a new query on each iteration and use printf to produce it and write it to the coprocess. We do not read back from the coprocess in the loop.
+
Once the loop is finished we close the input pipe then use cat to collect all that’s available from the coprocess and display it.
+
+
Running this script gives the following output (truncated after the first 12 lines):
+
Artist = Antiqcool
+ Album = Original Instrumental Acoustic Guitar Songs Vol 1
+ Genre = Alt Rock
+Subgenre = Folk,Instrumental New Age
+ Code = antiqcool-acousticguitar
+--------
+ Artist = Mokhov
+ Album = Jupiter Melodies
+ Genre = Electronica
+Subgenre = Electro Rock,Electronica,Instrumental Classical
+ Code = mokhov-jupitermelodies
+--------
+
+
I suspect that the strategy of feeding data to the coprocess in a loop but not reading from it until the loop has ended might be dangerous. It relies on the pipe storing the accumulated output, and a pipe’s capacity is limited (64 KiB by default on Linux; see pipe(7)), so this script could deadlock if the queue was long enough for the output to fill the pipe.
+
It might be possible to avoid this problem by reading data from the coprocess on each loop iteration using read with a timeout. Detecting the timeout could possibly be used to determine that the output pipe is empty and it’s time to write to the input pipe again.
+
I have not tried this idea though - it feels very clunky!
+
Conclusion
+
An interesting voyage, but:
+
+
I still don’t fully understand what 'exec {NAME[1]}>&-' means. I know what it does, but don’t get the syntax!
+
The general conclusions I find from various sources are:
+
+
named pipes are better
+
if you want to interact with a command use something like expect (I spent several years using expect and expectk in my job)
+
see the Stack Exchange reference below for more details
+
+
For this application I can write a much simpler Perl script that connects to the SQLite database, prepares a query with a substitution point and repeats the call with different values without a coprocess (available as a resource with this episode with the name show_queue.pl.zip). Many other programming solutions are also available. I do not believe that this is a task for Bash.
+
+
I’m in general agreement with clacke that Bash coproc is a feature looking for a use!
+
+
diff --git a/eps/hpr3413/hpr3413_show_queue.pl b/eps/hpr3413/hpr3413_show_queue.pl
new file mode 100755
index 0000000..c4ebeb1
--- /dev/null
+++ b/eps/hpr3413/hpr3413_show_queue.pl
@@ -0,0 +1,146 @@
+#!/usr/bin/env perl
+#===============================================================================
+#
+# FILE: show_queue.pl
+#
+# USAGE: ./show_queue.pl
+#
+# DESCRIPTION: Perform what the old 'show_queue' Bash script used to do but
+# using the new Magnatune SQLite database. Do it in Perl because
+# using a Bash coprocess is a pain.
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: ---
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.0.1
+# CREATED: 2021-08-05 18:40:22
+# REVISION: 2021-08-05 22:19:38
+#
+#===============================================================================
+
+use 5.010;
+use strict;
+use warnings;
+use utf8;
+use feature qw{ postderef say signatures state };
+#no warnings qw{ experimental::postderef experimental::signatures } ;
+
+use DBI;
+
+use Data::Dumper;
+
+#
+# Version number (manually incremented)
+#
+our $VERSION = '0.0.1';
+
+#
+# Script and directory names
+#
+( my $PROG = $0 ) =~ s|.*/||mx;
+( my $DIR = $0 ) =~ s|/?[^/]*$||mx;
+$DIR = '.' unless $DIR;
+
+#-------------------------------------------------------------------------------
+# Declarations
+#-------------------------------------------------------------------------------
+#
+# Constants and other declarations
+#
+my $basedir = "$ENV{HOME}/MusicDownloads";
+my $datadir = "$basedir/Magnatune_Data";
+my $scriptdir = "$basedir/magnatune-downloader";
+my $queue = "$scriptdir/pending";
+my $db = "$datadir/sqlite_normalized.db";
+
+my ( $dbh1, $sth1, $h1 );
+my ( $fmt, $count, $sku );
+
+#
+# Enable Unicode mode
+#
+binmode STDOUT, ":encoding(UTF-8)";
+binmode STDERR, ":encoding(UTF-8)";
+
+my $dbh
+ = DBI->connect( "dbi:SQLite:dbname=$db", "", "",
+ { AutoCommit => 1, sqlite_unicode => 1, } )
+ or die $DBI::errstr;
+
+my $re = '^http://magnatune.com/artists/albums/([A-Za-z0-9-]+)/?$';
+
+#
+# Define the query we need
+#
+my $sql = q{
+SELECT
+ ar.name AS artist,
+ al.name AS album,
+ group_concat(distinct ge.name) AS genre,
+ group_concat(distinct sg.name) AS subgenre,
+ al.sku as code
+FROM albums al
+JOIN artists ar ON al.artist_id = ar.artists_id
+JOIN genres_albums ga on al.album_id = ga.album_id
+JOIN genres ge ON ge.genre_id = ga.genre_id
+JOIN subgenres_albums sa on al.album_id = sa.album_id
+JOIN subgenres sg ON sg.subgenre_id = sa.subgenre_id
+GROUP BY al.album_id
+HAVING sku = ?;
+};
+
+#
+# Format string for printf
+#
+$fmt = "%-9s %s\n";
+
+#
+# Open the queue
+#
+open( my $qfh, '<', $queue ) or die "Unable to open $queue\n";
+
+#
+# Set up the query for repeated calls
+#
+$sth1 = $dbh->prepare($sql) or die $DBI::errstr;
+
+#
+# Loop through the queue, reporting the details for each album
+#
+$count = 0;
+while ( my $url = <$qfh> ) {
+ $count++;
+
+ chomp($url);
+
+ #
+ # Parse the URL for the SKU component and use it to search. Skip if the
+ # parsing fails or the SKU is not found
+ #
+ if ( ($sku) = ( $url =~ $re ) ) {
+ $sth1->execute($sku);
+ if ( $h1 = $sth1->fetchrow_hashref() ) {
+ printf $fmt, 'Artist:', $h1->{artist};
+ printf $fmt, 'Album:', $h1->{album};
+ printf $fmt, 'Genre:', $h1->{genre};
+ printf $fmt, 'Subgenre:', $h1->{subgenre};
+ printf $fmt, 'Code:', $h1->{code};
+ print '-' x 9, "\n";
+ }
+ else {
+ say "Could not find SKU $sku";
+ }
+ }
+ else {
+ say "Problem parsing URL in queue (line $count): $url";
+ }
+}
+
+close($qfh);
+
+exit;
+
+# vim: syntax=perl:ts=8:sw=4:et:ai:tw=78:fo=tcrqn21:fdm=marker
+
diff --git a/eps/hpr3413/hpr3413_show_queue_db_1 b/eps/hpr3413/hpr3413_show_queue_db_1
new file mode 100755
index 0000000..eadd1b6
--- /dev/null
+++ b/eps/hpr3413/hpr3413_show_queue_db_1
@@ -0,0 +1,126 @@
+#!/bin/bash -
+#===============================================================================
+#
+# FILE: show_queue
+#
+# USAGE: ./show_queue
+#
+# DESCRIPTION: Show the pending queue, expanding each album's details from
+# the database.
+#
+# / This version calls sqlite3 repeatedly in a loop /
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: ---
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.1.1
+# CREATED: 2020-09-15 12:38:03
+# REVISION: 2020-09-15 12:38:11
+#
+#===============================================================================
+
+set -o nounset # Treat unset variables as an error
+
+SCRIPT=${0##*/}
+#DIR=${0%/*}
+
+VERSION='0.1.1'
+
+#=== FUNCTION ================================================================
+# NAME: cleanup_temp
+# DESCRIPTION: Cleanup temporary files when a 'trap' command is triggered
+# PARAMETERS: * - names of temporary files to delete
+# RETURNS: Nothing
+#===============================================================================
+function cleanup_temp {
+ for tmp in "$@"; do
+ [ -e "$tmp" ] && rm --force "$tmp"
+ done
+ exit 0
+}
+
+#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+STDOUT="/dev/fd/2"
+
+#
+# Files and directories
+#
+BASEDIR="$HOME/MusicDownloads"
+DATADIR="$BASEDIR/Magnatune_Data"
+SCRIPTDIR="$BASEDIR/magnatune-downloader"
+
+QUEUE="$SCRIPTDIR/pending"
+DB="$DATADIR/sqlite_normalized.db"
+
+#
+# Sanity checks
+#
+[ -e "$QUEUE" ] || { echo "$QUEUE not found"; exit 1; }
+
+#
+# Check the queue contains data
+#
+if [[ ! -s $QUEUE ]]; then
+ echo "$SCRIPT($VERSION): there is nothing in the queue"
+ exit
+fi
+
+#
+# Make temporary files and set traps to delete them
+#
+TMP1=$(mktemp) || { echo "$SCRIPT: creation of temporary file failed!" >$STDOUT; exit 1; }
+trap 'cleanup_temp $TMP1' SIGHUP SIGINT SIGPIPE SIGTERM EXIT
+
+RE='^http://magnatune.com/artists/albums/([A-Za-z0-9-]+)/?$'
+
+#
+# Add a partial SQL query that counts the feeds that match the regex. Store it
+# in a temporary file
+#
+cat > "$TMP1" <&"${dbproc[1]}"
+
+done < "$QUEUE"
+
+#
+# Close the input pipe (a file descriptor move documented as '[n]>&digit-'
+# which "moves the file descriptor digit to file descriptor n, or the standard
+# output (file descriptor 1) if n is not specified". There is no digit here,
+# so presumably /nothing/ is being moved to the file descriptor in dbproc[1].
+#
+exec {dbproc[1]}>&-
+
+#
+# Collect everything from the coprocess
+#
+cat <&"${dbproc[0]}"
+
+# vim: syntax=sh:ts=8:sw=4:ai:et:tw=78:fo=tcrqn21
diff --git a/eps/hpr3413/hpr3413_show_queue_orig b/eps/hpr3413/hpr3413_show_queue_orig
new file mode 100755
index 0000000..aabcd6e
--- /dev/null
+++ b/eps/hpr3413/hpr3413_show_queue_orig
@@ -0,0 +1,78 @@
+#!/bin/bash -
+#===============================================================================
+#
+# FILE: show_queue
+#
+# USAGE: ./show_queue
+#
+# DESCRIPTION: Show the pending queue, expanding each album's details from
+# the 'all_albums' file
+# / This is the version for the Magnatune XML catalog /
+#
+# OPTIONS: ---
+# REQUIREMENTS: ---
+# BUGS: ---
+# NOTES: ---
+# AUTHOR: Dave Morriss (djm), Dave.Morriss@gmail.com
+# VERSION: 0.0.2
+# CREATED: 2017-11-14 16:31:46
+# REVISION: 2018-11-24 14:51:57
+#
+#===============================================================================
+
+set -o nounset # Treat unset variables as an error
+
+SCRIPT=${0##*/}
+#DIR=${0%/*}
+
+#
+# Files and directories
+#
+BASEDIR="$HOME/MusicDownloads"
+DATADIR="$BASEDIR/Magnatune_Data"
+SCRIPTDIR="$BASEDIR/magnatune-downloader"
+
+SUMMARY="$DATADIR/all_albums"
+QUEUE="$SCRIPTDIR/pending"
+
+#
+# Sanity checks
+#
+[ -e "$QUEUE" ] || { echo "$QUEUE not found"; exit 1; }
+[ -e "$SUMMARY" ] || { echo "$SUMMARY not found"; exit 1; }
+
+#
+# Check the queue contains data
+#
+if [[ ! -s $QUEUE ]]; then
+ echo "$SCRIPT: there is nothing in the queue"
+ exit
+fi
+
+RE='^http://magnatune.com/artists/albums/([A-Za-z0-9-]+)/?$'
+
+#
+# Read and report the queue elements
+#
+n=0
+while read -r URL; do
+ ((n++))
+
+ if [[ $URL =~ $RE ]]; then
+ SKU="${BASH_REMATCH[1]}"
+ else
+ echo "Problem parsing URL in queue (line $n): $URL"
+ continue
+ fi
+
+ #
+ # The SKU should be the entire thing, so we surround it with word
+ # boundaries so we don't match other variants.
+ #
+ awk 'BEGIN{RS = "\n\n"; ORS = "\n----\n"} /Code: +\<'"$SKU"'\>/{print}' "$SUMMARY"
+done < "$QUEUE"
+
+exit
+
+# vim: syntax=sh:ts=8:sw=4:ai:et:tw=78:fo=tcrqn21
+
diff --git a/eps/hpr3445/hpr3445_full_shownotes.html b/eps/hpr3445/hpr3445_full_shownotes.html
new file mode 100755
index 0000000..44ab532
--- /dev/null
+++ b/eps/hpr3445/hpr3445_full_shownotes.html
@@ -0,0 +1,420 @@
True critical thinking seems to be the key (HPR Show 3445)
+
A response to HPR 3414
+
Dave Morriss
+
Table of Contents
+
A response to Critical Thinking may make You Critical of the Covid Crisis
+
(HPR episode 3414, produced by CoGo and released on 2021-09-02)
+
Defining terms
+
+
What is Critical Thinking?
+
+
The Wikipedia definition begins: “Critical thinking is the analysis of facts to form a judgment.”
+
It goes on to say: “The subject is complex, and several different definitions exist, which generally include the rational, skeptical, unbiased analysis, or evaluation of factual evidence.”
+
See the references below.
+
+
+
+
Note the use of the terms fact, factual evidence and unbiased analysis. It is my contention that HPR episode 3414 fails in these regards in several places.
+
+
+
What is an “experiment”?
+
+
Wikipedia’s definition begins: “An experiment is a procedure carried out to support or refute a hypothesis. Experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular factor is manipulated.”
+
+
+
+
The term experiment is often used incorrectly in episode 3414. A better term would be observation or anecdote.
+
+
+
The virus:
+
+
The virus is a coronavirus. There are many viruses classified in this way.
+
The name of the virus is SARS-CoV-2. The SARS part stands for Severe Acute Respiratory Syndrome, the type of disease caused by the virus. CoV signifies that it is a coronavirus and the 2 means it’s the second SARS-type coronavirus to have caused problems in the recent past. The other one, just called SARS, occurred in 2003.
+
The name of the disease caused by SARS-CoV-2 is COVID-19. The letters COVID define it as a coronavirus disease. The 19 part is because it was first discovered in 2019.
Each point refers to an observation or argument made in the audio. The start and end times are noted in each case.
+
I have a degree in Biology and maintain an interest in the subject while working in Information Technology. During my education I was required to read and understand scientific papers and the arguments that they made. I have tried to use these methods to analyse the points made in show 3414 and to refer to relevant papers and articles to support my arguments.
+
Andrew, who joins me in this show, has a PhD in applying statistical methods to the analysis of solar and geomagnetic activity, and a background in science and in explaining it to the public. He was a lecturer (aka professor) at the Open University, where he taught a post-graduate course on statistics, and he wrote a book on astronomy for children published by Cambridge University Press. More recently, his book How Scotland Works, published by Luath Press, explores the ideas, politics and statistics that describe Scotland’s society and economy. He has been closely following the developments in COVID-19, particularly in Scotland and throughout the UK.
+
I have transcribed the audio of this episode and will make reference to parts of this transcription throughout this response.
+
Point 1: Social Distancing
+
From 00:01:34 to 00:02:22
+
+
I want to take you on a tour of thinking. I want to expose you to some very common experiments.
+
The news media used a spray bottle filled with a clear liquid that turns blue under UV light. They had someone stand six feet away and they sprayed the bottle in the subject’s direction. At six feet many large droplets made their way from the bottle to the subject. Because of this we have our six foot Social Distancing rule. If this proves anything it proves six feet is not enough. But if they told us that we couldn’t get within eighteen feet of another person, how far do you think that rule would get?
+
The face mask takes up the slack, right … right?
+
+
Dave’s response:
+
+
While it might be argued that this use of a spray bottle is an experiment, it’s not much of one. In what respect is a spray bottle a simulation of a human breathing or coughing? How does the liquid used relate to what comes out of a human mouth or nose?
+
I disagree that this simulates the human transmission of infectious materials; it only demonstrates that the UV-reactive liquid used in the demonstration can travel further than six feet when sprayed from this particular device.
+
+
Andrew’s response
+
+
The point would be strengthened with a specific reference. We are only told it was in the “news media”.
+
The implication is that the six foot distancing rule is based on this simple, flawed experiment. There are much more rigorous experiments, which I’ll refer to when we come to masks, but no experiment can tell us what distance we should specify. Indeed, although the UK quotes it as 2 metres, similar to the US, several European countries went with a 1 or 1.5 metre rule. These details and others are aired in this debate in the UK parliament.
+
A six foot distancing rule is intended to be a simple and easy to understand measure that will help reduce transmission. It is commonly understood that droplets or aerosols can exceed that distance.
If you wear eyeglasses you have already done the next experiment many times. If you don’t wear eyeglasses you can still observe this experiment. When you come out of a grocery store on a cold day, stop for a while and watch those who are coming in. Those wearing eyeglasses will have their eyeglasses fogged up; you already know why I know.
+
If everybody’s breath is going around their masks already, what good is a second mask going to do? You can answer that one easily. How much more effective is an N95 mask on your face over an N95 mask in your pocket? A little.
+
When I spray paint, if the mask seals to my face I won’t smell the thinner, but when it doesn’t quite seal, I can smell the thinner. If you can smell the coffee… Well air should never go around the mask.
+
+
Dave’s response:
+
+
Again, the observations made here are not an experiment, but nevertheless this is a question worth asking. Does having a poorly fitting mask completely cancel out the effect of wearing a mask?
+
+
Different types of masks will have different levels of effectiveness.
+
Masks need to be worn properly to optimise effectiveness - so people with their noses outside their masks are wearing them improperly, as are people who wear them on their chins!
+
The readily-available mask types such as disposable surgical masks and washable cloth masks will reduce the “respiratory cloud” and therefore will lessen the likelihood of a COVID-19 carrier spreading the virus.
+
Have a look at the first paper below for a detailed analysis.
+
+
+
Andrew’s response:
+
+
It is true that a mask is not 100% effective in stopping transmission of the virus but this point risks being a strawman argument because very few people make such a claim. In fact, a government minister who tried to claim mask wearing was nearly 100% effective in stopping transmission had to retract the claim under pressure from journalists and the public as the infographic he based it on had no valid source.
+
I wear masks but am unsure about their efficacy. This is because there are so many factors in play that real world usage is very difficult to assess or simulate experimentally. How many people are present? What type of mask? How are they being worn and handled when removed? What is the ventilation like? That’s just to name but a few. This article published in the journal Nature is typical of many experiments and finds some surprising results.
+
Even if a mask was, say, 50% effective, then it is still a measure worth considering in concert with others. The statement in the previous point that the “face mask takes up the slack” is too simplistic. There are many measures that can and are considered.
The next experiment I do nearly every day. I make a cup of coffee and I put milk into it. You can probably do this with tea also. If you pour the milk in along the edge of the cup, you don’t need to stir it with a spoon. With the right cup the milk will be completely mixed in. Why is this important? If you put a Covid patient wearing a mask in the corner of a room the air they breathe will be stirring up the room. It won’t be as complete as the milk gets mixed but it will get some mixing done.
+
+
+
The next experiment requires the weather to cooperate, but hopefully you can recall a previous version of this experiment. It concerns water in the air. When water in the air is in large groups or drops it falls to the ground very quickly. When the drops are really tiny they have very little weight but proportionately great wind drag. This allows the tiny drops to spend a lot more time in the air, before hitting the ground. The drops that come out of a person’s mouth are very tiny indeed. Combine their time in the air with the breathing causing the mixing and you have six feet and masks adding up to a very short safe time in an enclosed area.
+
+
+
The other day I saw two people travelling in a car with masks on. If they are from different families and are brought together for some task that requires them to travel together the media would have them wear masks to keep safe. If you’ve been paying attention you’d know that if they had different viruses in their systems before the trip they were sharing those viruses after the trip.
+
+
Dave’s response:
+
+
No experiments here; however, there are some observations worth discussing. Yes, an infected person in a closed, poorly ventilated room will spread viruses into the atmosphere.
+
Yes, human breath contains some very fine aerosols which may contain infective agents.
+
This is the reason why the advice from all of the sources is to:
+
+
Avoid situations where large numbers of people are congregating in indoor environments
+
Boost ventilation in indoor environments as much as possible
+
Use masks and suitable distancing in indoor environments
+
+
+
Andrew’s response:
+
+
The description given here is accurate enough and even though it doesn’t reference any research, the real-world, everyday analogies are appropriate.
+
This point uses the same kind of strawman argument as before, in that it is rebutting the claim that masks plus social distancing are effective by themselves. This claim is not widespread. In some situations such measures may reduce transmission significantly, such as passing through a room or making a brief visit to a shop, but in others they may not, say, when spending hours in a small badly ventilated room with many people.
There are three web pages that I want you to know about. Two of them testify of the importance of Vitamin D3 to your immune system, and one of them testifies to the importance of body temperature to someone exposed to Covid, or any other virus.
+
4000 to 5000 IU is the recommended dose for winter time, but I talked with someone whose Doctor recommended 45,000 IU for a short time to get her D3 up to a safe level.
+
Oh, here’s another experiment that happens every year, even those who want you to get a vaccine admit it. When October came around last year, even those advocating for a vaccine predicted a second wave of Covid infection. In order for a second wave to happen there had to be a receding of the first wave. That would have been during the experiment in the summer. History records this experiment every year, not just for Covid but for all viruses. Flu season takes a break in the summer. That doesn’t mean you can’t get the flu during the summer, but it is a lot harder. The politicians don’t want you to think about how the sunshine increases Vitamin D3 in your system and keeping your body temperature warm slows the growth of viruses. I want you to ask yourself why the flu takes a break in the summer, and how can we keep it going through the fall and winter. I’ve mentioned the two reasons I can think of.
+
If you duck up (using Duck Duck Go) “COVID-19”, “Doctor” and “Clinical Trial” you’ll find the first web page, a YouTube video. A hospital in Spain did a double blind study with patients who came in with Covid symptoms. All 76 got normal hospital treatment for Covid, but 50 of them also got Vitamin D3. It’s admittedly a small study, but the score: 7.6% death rate without the D3 and 0% death rate with D3 means it deserves to be repeated all around the world.
+
If you duck up “RadioLab Podcast” and “invisible allies” you’ll find the RadioLab episode of the same name. This episode suggests that Vitamin D3 helped the homeless population weather the Covid outbreak. How few homeless came down with the Covid symptoms is notable.
+
+
Dave’s response:
+
+
There has been a lot of discussion about the role of Vitamin D3 in the lessening of the effects of COVID-19. Yes, Dr John Campbell has discussed this on a YouTube video which is referenced in the show notes (https://www.youtube.com/watch?v=V8Ks9fUh2k8). Unfortunately the Spanish Clinical trial mentioned in the video is too small to give enough confidence in its results, and other trials have so far proved inconclusive.
+
However, the National Health Service (NHS) in the UK has been recommending the taking of Vitamin D3 supplements for elderly and immunocompromised people, and has been providing free access to supplies during the winter.
+
Dr Anthony Fauci is on record as having said he takes a Vitamin D3 supplement himself.
+
The conclusion seems to be that supplements should be taken, but in addition to vaccination and certainly not instead of vaccination.
+
The argument that homeless people have avoided COVID-19 due to higher vitamin D3 levels is unsupported.
+
+
Andrew’s response:
+
+
Whether or not Vitamin D3 helps protect against COVID-19, it has been clear for many years that many people who live at higher latitudes (I live just short of 56°N) do not get enough exposure to sunlight in the winter and so have a deficiency in this vitamin. In Scandinavian countries this is offset by a diet naturally rich in Vitamin D3, but in the UK and especially Scotland many people really need to either change their diets or take supplements. This advice has fallen on deaf ears, and even if the effect of Vitamin D3 on COVID-19 were clear cut, I suspect it would still continue to be ignored.
+
The point is made that viruses recede during the summer months. The usual and most obvious explanation for this is not mentioned: people spend more time indoors with the windows closed in winter.
+
Further, the first COVID-19 wave began in spring in the northern hemisphere and continued into the summer. The most recent wave had a peak in summer months in several countries, including the UK, US and France. This is not usual for a respiratory virus or a flu virus. This can be verified by the data published on cases by Our World in Data.
If you duck up “Corona virus”, “2003” and “BMR” you’ll find a web page where a medical professional points out the importance of staying warm to fight Covid. This knowledge is from 2003 and a previous Covid outbreak. We learn from history that we don’t learn from history. But medical professionals should be required to answer for this information from 2003.
+
When I was a kid, if you came in wet from winter weather, your Mom would say something like “Get out of those wet clothes before you catch your death of cold.” After this some people calling themselves scientists said “You don’t get a cold from being cold, you get a cold from a virus.” Unfortunately we’ve built a society on this misinformation. Though there’s some truth to this, those who paid attention knew that being cold for a length of time could lead to catching the flu. Now there is evidence that many if not all viruses replicate faster if your body temperature is reduced by 5°(F) or so. Spiking a fever is probably a way for your body to fight off a virus. Some people assert that a fever, if it’s less than 104°F should be encouraged. How do people get their temperature down by 5°? The group of people in Texas who got Covid together worked in a meat packing plant. Cold extremities? Probably. Another method to reduce the temperature of people’s extremities is to take them to a hospital.
+
Most of us have had the experience of being cold in a hospital room. There’s valid scientific reason for this. The air is kept cold around beds made with stainless steel to keep condensation from forming, and to keep bacteria from growing on parts of the bed. While this is important, it’s also important for the patient’s body temperature to be maintained. One solution would be to supply each bed with an electric blanket.
+
+
Dave’s response:
+
+
There’s little evidence that being cold in the sense being used here has any effect on susceptibility to viruses (or other agents). Animal experiments have shown an effect of significant lowering of body temperature on the immune system, but nothing similar was found when looking for information about humans.
+
The medical professional cited in the notes for show 3414 was responding to a Hong Kong report into the original SARS virus. The opinion reported in this response was that cold might be a factor in the worsening of the disease. This is an opinion, not a clinical trial.
+
Note that the 2003 corona virus called SARS was not referred to as “COVID”.
+
My experience of hospitals in the UK (and other parts of Europe) is that these establishments are kept very warm, sometimes uncomfortably so.
+
+
Andrew’s response
+
+
As suggested, I did do some web searches on this topic and found that the evidence of the effect of body temperature was not at all clear cut. Again, there are many factors involved in virus transmission and the immune system and it is too simplistic to point to just one in isolation.
My government, and probably yours, wants everybody vaccinated. But they don’t trust the vaccines enough to hold Big Pharma accountable for the damage the vaccines cause. The unvaccinated, who already have antibodies for Covid, are on their list. But if they already have antibodies, what use is the vaccine to them? It’s an important question because there may be reasons why governments want people vaccinated other than health. If they are ignoring Vitamin D3 and body temperature, and concentrating on experimental vaccines, then public health is clearly not the issue. I think we need ambulance chaser lawyers for the Covid crisis. If someone has an ambulance chaser lawyer send a registered letter to a hospital or nursing home, detailing the importance of vitamin D3 and body temperature to fight Covid viruses, they will have to give patients vitamin D3 and keep them warm. Just a few institutions as targets are all that will be necessary, because the rates of serious infections will show the efficacy of this treatment. Once this information goes public, the ambulance chasers will be able to drain money from any institution that ignores this - possibly including governments.
+
If you’ve already had Covid and don’t want to get an experimental vaccine, you should get an antibody test. If you already have the antibodies for Covid, public health cannot be a reason for getting this experimental vaccine. An ambulance chaser lawyer can then drain money from whoever compels you to get the vaccine and then fires you for not getting it. If a company or school system or hospital compels their employees to get the vaccines, even though the drug companies are given immunity by governments, the company that requires vaccination should be held responsible for harmful side effects and death.
+
+
Dave’s response:
+
+
Since vaccines were invented they have been vital to prevent the spread of diseases. The list of diseases and their vaccines is long and getting longer: smallpox, cholera, diphtheria, polio, rabies, tetanus, tuberculosis, and so on. I was growing up in the 1950s when everyone was frightened of polio and diphtheria, a fact that even I as a child was aware of. See the article on NHS childhood vaccination. It is unbelievable that anyone in 2021 would wish to ignore or attempt to undermine such science.
+
The COVID-19 vaccine is not experimental. Vaccine technology has moved forward tremendously in recent years to the level that targeted vaccines can be made much faster than ever before. Several of the current vaccines use messenger RNA (mRNA) to make human cells generate the virus proteins which stimulate the immune system (e.g. Pfizer and Moderna). Others use existing harmless viruses which have been modified to cause human cells to generate these proteins (e.g. AstraZeneca, J&J). These vaccines can be developed a lot faster than previously, and the full range of normal clinical trials is being run at high speed in order to reach the required confidence level as rapidly as possible.
+
All vaccines have some risks associated with them, but these are almost always minimal. The NHS staff check for any allergies when you are receiving a vaccination and you are asked to remain nearby for 10 minutes in case you might have a serious allergic reaction. A very rare blood clotting problem has been reported in relation to the Oxford/AstraZeneca vaccine, and this is currently under investigation. The risk of getting COVID-19 is much higher than any vaccine side-effects – especially if you are older than 50 or have comorbidities.
+
It is advised that people who have had COVID-19 and who have antibodies to the virus be vaccinated to ensure that they have a safe level of immunity. It is possible that so-called natural immunity is not as effective as that provided by the vaccines. This would depend on factors like which variant had been caught, whether the illness was asymptomatic, and so forth.
+
As mentioned before, there are some indications that maintaining vitamin D3 helps support the immune system. There are similar indications that zinc also has effects. However, the maintenance of D3 and zinc levels is not a cure for the COVID-19 disease. Also the evidence for their effectiveness is still minimal. It is important to emphasise that these measures are not a substitute for the vaccine.
+
+
Andrew’s response
+
+
Some countries are considering immunity passports, showing that you have had the virus, in addition to vaccine passports. However, it is easier to show that a person has had a vaccine than to show that they have immunity, or even just that they have had the virus and recovered, mainly because testing has not been done consistently over time or across countries. So it is not surprising that the relatively easy option of vaccine passports is being preferred.
+
Lacking antibodies does not necessarily mean a lack of immunity. A paper published in the journal Nature presents evidence that long-lived immunity can arise from T-cells and that such immunity can even apply across different coronaviruses. Also, it is known that antibodies wane over time after infection, falling by half in 90 days. For both reasons, an antibody test in itself would not be definitive on immunity.
+
Previous points argued that measures to prevent COVID-19 transmission are either only partially effective or being overlooked entirely, but at this point the argument jumps to the assertion that governments and big pharma, with some form of media cooperation, are encouraging us to get the vaccination for some ulterior motive. If taken at face value, the previous points are consistent with this but they do not justify such an assertion.
+
It is not said what this motive might be, but the way big pharma is mentioned implies it is profit. Why is this left vague?
+
“…the company that requires vaccination should be held responsible for harmful side effects and death.” Companies and governments should certainly be held to account for any harm they do, but no evidence has been presented of the vaccine causing harm here. Huge numbers of people have now been given vaccines in many countries and so we should now see hard evidence of such harm if it were real. If governments were somehow covering this up on an enormous scale that would require an amazing level of competence to control, far beyond the level of (in)competence I have seen from any government. But let’s say I’m wrong, and they have managed it: how is it that of all the people I know who have had the vaccine — which is now almost every adult that I know — not one of them has told me of anything more than a minor side effect?
Episode 3414 is in general misleading. It purports to be applying Critical Thinking to various aspects of the COVID-19 pandemic but in reality is propagating vague anecdotes at best and serious misinformation at worst. It is possible that this propagation of misinformation is well-meaning, but this type of thing should not be done without plentiful references to facts in the form of peer-reviewed scientific papers and items from properly qualified expert sources.
+
The part of episode 3414 separated out in this critique as Point 6 contains examples of conspiracy theories. The theory that in a pandemic governments are pushing vaccination for some nefarious purpose makes no sense. Neither does calling the vaccines experimental. No attempts are made to support such a case - because there is nothing to support it. This is a particular example of the failure of critical thinking and even plain common sense.
+
+
Andrew’s conclusions
+
+
This show is well-presented and some thought has gone into its structure. Rhetorically it is good.
+
But it is not logically sound. The argument is geared to persuade and the reasoning and evidence base is unsound.
+
“Critical thinking” most certainly involves questioning the orthodoxy, that is widely accepted thinking, and it is crucial to do that with those who wield power, including governments and corporations amongst others. But true critical thinking does not only target those with power. It should be applied to all arguments.
+
The true test of a critical thinker, and I’m open to challenge on this, is that they welcome criticism and will use it to improve their thinking.
COVID-19 false dichotomies and a comprehensive review of the evidence regarding public health, COVID-19 symptomatology, SARS-CoV-2 transmission, mask wearing, and reinfection:
+
+
+
diff --git a/eps/hpr3525/hpr3525_full_shownotes.html b/eps/hpr3525/hpr3525_full_shownotes.html
new file mode 100755
index 0000000..258abf2
--- /dev/null
+++ b/eps/hpr3525/hpr3525_full_shownotes.html
@@ -0,0 +1,314 @@
Battling with English - part 4 (HPR Show 3525)
+
Some confusion with English plurals; strange language changes
+
Dave Morriss
+
Table of Contents
+
Confusing plurals
+
In this episode, the fourth of this series, I’m looking at some words that have singular and plural forms that are very different. These lead to a lot of confusion as we’ll see.
+
I also want to look at the way that English is evolving in some very strange and apparently senseless ways!
+
Personal note: I notice I started preparing this show in 2019; unfortunately, COVID messed up my productivity for the next two years, but I hope I can now begin to be productive again!
+
Nouns ending in “…is”
+
These words usually derive from Greek. This means they don’t conform to the usual English pattern of writing singulars and plurals.
+
Some examples: thesis, parenthesis, crisis, nemesis, axis.
+
+Singular      Plural        Common mistakes
+thesis        theses        thesises ✖, thesis’ ✖
+parenthesis   parentheses   parentheses for the plural but parenthese ✖ for the singular; parenthesis for both singular and plural ✖
+crisis        crises        crisises ✖
+nemesis       nemeses       nemesises ✖
+axis          axes          like parentheses, axes is used as the plural but axe ✖ for the singular
+
A mistake often made with these words is that people put es on the end of the singular form to make plurals, thus thesises. The rule here is that the is at the end is replaced by es. I included thesis’ with a possessive apostrophe on the end because I have seen this - someone very confused between unusual plurals and possessives.
+
The mistaken assumption that the plural parentheses must have a singular form parenthese is remarkably common. The thinking seems to be that just removing the final s from the plural makes it singular.
+
I just watched a YouTube video where the presenter made the axis → axe error, so that one is out there too.
+
+
Nouns ending in “…a”
+
These are irregular plurals which (formally) end with "ae"1. Some examples follow. The plurals in italics are alternatives which are not used in formal contexts but have become accepted in informal ones:
+
+Singular   Plural                 Common mistakes
+antenna    antennae, antennas     antenna ✖ is not a plural
+alga       algae, algaes          alga ✖ is not a plural
+formula    formulae, formulas     formula ✖ is not a plural
+larva      larvae, larvas         larva ✖ is not a plural
+nebula     nebulae                nebula ✖ is not a plural; nebulas ✖ is not an accepted form
+nova       novae, novas           nova ✖ is not a plural
+vertebra   vertebrae, vertebras   vertebra ✖ is not a plural
+pupa       pupae, pupas           pupa ✖ is not a plural
+
There are other plurals that are confusing of course. As a Biology student I encountered words like proboscis (a Greek-derived word meaning a feeding tube such as an insect mouthpart or an elephant’s trunk). We were taught that the plural was proboscides, though nowadays proboscises is acceptable.
+
I’ll leave this subject here though - at least for the moment.
+
Some recent language evolution
+
There are two things I wanted to mention here, both of which I find strange, and being somewhat superannuated myself, don’t approve of:
+
The use of “is is”
+
You will hear people saying, for example: “The problem is is that it’s snowing.” Finding this construction written is rare (in my experience) but it is common in speech on TV and radio.
+
I recall people writing to the BBC to ask why speakers were doubling the word “is”, and the response being that it was just a common hesitation. That was not a good reply, since the construction is now everywhere and surely cannot be a mere verbal tic.
+
The sentence: “The question is, is it snowing?” is acceptable of course. The first “is” ends the phrase and the second starts the question.
+
See the references below discussing this oddity.
+
“Honing in”
+
This expression seems to be a mishearing or mispronunciation of the phrase “Home in”. I’m not sure if this is a Mondegreen but if not, it should be!
+
There is a rather poor excuse that since “hone” means to sharpen or narrow (to a point) this is acceptable. I don’t find this acceptable myself, because otherwise we’d have:
+
+
Honing missiles
+
Honing pigeons
+
Sharpening in
+
+
Also, “homing”, “honing” and “sharpening” would be synonyms, and there would be expressions such as “The detective was sharpening in on the criminal.”
+
In such a world I would be leaving my blunt chisels out on the bird table in the hopes a “honing pigeon” would pass by and sharpen them.
I was taught to use the ligature æ. This is formed from the letters a and e, originates in Latin and was common in English at one time. It’s rare to see this in modern text in my experience.↩︎
Bash snippet - some possibly helpful hints (HPR Show 3551)
+
Using ‘eval’, ‘mapfile’ and environment variables
+
Dave Morriss
+
Table of Contents
+
Overview
+
I write a moderate number of Bash scripts these days. Bash is not a programming language as such, but it’s quite powerful in what it can do by itself, and with other tools it’s capable of many things.
+
I have enjoyed writing such scripts for many years on a variety of hardware and operating systems, and Bash is my favourite - partly because Linux itself is so flexible.
+
This is just a short show describing three things I tend to do in Bash scripts to assist with some tasks I find I need to undertake.
+
+
Generate Bash variables from a text file - usually output from a program
+
Fill Bash arrays with data from a file or other source
+
Use environment variables to control the Bash script’s execution
+
+
Tasks
+
Generating Bash variables
+
There’s a Bash command 'eval' that can be used to evaluate a string as a command (or series of commands). The evaluation takes place in the current shell, so anything returned - Bash variables in this case - is available to the current script.
+
This is different from setting variables in a sub-shell (child process), because such variables are local to the sub-shell and disappear when it finishes.
+
The eval command takes a list of arguments which are concatenated into a string and the resulting string is evaluated.
+
The eval command is seen as potentially dangerous in that it will execute any command it is given. Thus scripts should take precautions to ensure that whatever is evaluated is predictable. Do not write a Bash script that executes whatever is given to it!
+
One particular case I use eval for is to set variables from a text file. The file is generated from the HPR show upload process and I want to grab the title, summary and host name so I can generate an index file for any supplementary files uploaded with a show.
+
The file contains text like:
+
Host_Name: Dave Morriss
+Title: Battling with English - part 4
+Summary: Some confusion with English plurals; strange language changes
+
In my script the file name is in a variable RAWFILE, and I run the following command:
$ sed -n "/^\(Title\|Summary\|Host_Name\):/{s/^\([^:]\+\):\t/\1='/;s/$/'/;p}" "$RAWFILE"
+Host_Name='Dave Morriss'
+Title='Battling with English - part 4'
+Summary='Some confusion with English plurals; strange language changes'
+
The sed commands find any line beginning with one of the three keywords and generate output consisting of that keyword, an equals sign and the rest of the line wrapped in single quotes. Thus the matched lines are turned into VAR='string' sequences.
+
So, eval executes these and sets the relevant variables, which the script can access.
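+
In the script this boils down to a single line; a minimal sketch, assuming the sed command shown above and that RAWFILE is already set:
+
+eval "$(sed -n "/^\(Title\|Summary\|Host_Name\):/{s/^\([^:]\+\):\t/\1='/;s/$/'/;p}" "$RAWFILE")"
+echo "Title is: $Title"    # the three variables now exist in the current shell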
+
This method is not foolproof. If a string contains a single quote the eval will fail. For the moment I haven’t guarded against this.
+
Filling a Bash array
+
I have a need to fill a Bash indexed array with sorted filenames in a script that deals with pictures sent in with HPR shows.
+
I want to use the find command to find the pictures, searching for files which end in jpg, JPG, png or PNG. I don’t want to visit sub-directories. The command I want is:
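+
Something like the following, using command substitution; this is a sketch assuming find’s -maxdepth and -iname options (-iname matches case-insensitively, so it also catches mixed-case names):
+
+FILES=( $(find . -maxdepth 1 -type f \( -iname '*.jpg' -o -iname '*.png' \) | sort) )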
The output from the find and sort commands in the command substitution expression will consist of a number of newline separated lines, but Bash will replace the newlines by spaces, so the array-defining parenthesised list will consist of space-delimited filenames, which will be placed in the array.
+
However, what if a file name contains a space? It’s bad practice, but it’s permitted, so it might happen.
+
This is where the mapfile command might help. This was introduced in episode 2739 where its options were described. Typing help mapfile in a terminal will show a similar description.
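+
A sketch of the mapfile version, under the same assumptions about the find options as before:
+
+mapfile -t FILES < <(find . -maxdepth 1 -type f \( -iname '*.jpg' -o -iname '*.png' \) | sort)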
We use a process substitution here which preserves newlines. One of the features of mapfile that is useful in this context is the -t option which removes the default delimiter, a newline. The delimiter can be changed with the -d DELIM option. The text between delimiters is what is written to the array elements, so as long as there are no filenames with newlines in them this will be better than the previous method.
+
To be 100% safe the find command should use the -print0 option which uses a null character instead of the default newline, and mapfile should be changed to this delimiter. We also need to tell sort to use null as a line delimiter which is done by adding the -z option1.
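+
Again as a sketch, with the same assumed find options:
+
+mapfile -t -d '' FILES < <(find . -maxdepth 1 -type f \( -iname '*.jpg' -o -iname '*.png' \) -print0 | sort -z)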
What it doesn’t tell you in 'help mapfile' is that an empty string as a delimiter (-d '') causes mapfile to use a null delimiter. There doesn’t seem to be any other way to do this - or nothing I could find anyway. You can read this information in the Bash manpage.
+
Having discovered this information while preparing this show I shall certainly update my script to use it!
+
Turning debugging on
+
I find I need to add debugging statements to the more complicated scripts I write, and to help with this I usually define a fairly simple function to do it. Here’s what I often use:
+
+_DEBUG () {
+    [ "$DEBUG" == 0 ] && return        # debugging is off; do nothing
+    for msg in "$@"; do
+        printf 'D> %s\n' "$msg"        # print each argument tagged with 'D> '
+    done
+}
+
This uses a global variable 'DEBUG' and returns without doing anything if the variable contains zero. If it is non-zero the function prints its arguments preceded by 'D> ' to show it is debug output.
+
I add calls to this function throughout my script if I want to check that values are what I expect them to be.
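+
For example, checking a hypothetical variable named 'count':
+
+_DEBUG "count = $count" "about to process the queue"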
+
The issue is how to turn debugging mode on and off. There are several ways, from the simplest (least elegant) to the most complicated in terms of coding.
+
+
Edit the script to set the DEBUG variable to 1 or zero
+
Set it through an external variable visible to the script
+
Add option processing to the script and use an option to enable or disable debug mode
+
+
I tend to use choice 3 when I’m already dealing with options, but if not then I use choice 2. This is the one I’ll explain now.
+
I use Vim as my editor, and in Vim I use a plugin ('BashSupport') with which I can define boilerplate text to be added to scripts. I have configured this to generate various definitions and declarations whenever I create a new Bash script. One of the lines I add to all of my scripts is:
+
SCRIPT=${0##*/}
+
This takes the default variable $0 and strips off everything up to the last '/' character, thus leaving just the name of the script. I talked about these capabilities in show 1648.
+
I have recently started adding the following lines:
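+
A sketch of these lines, matching the description which follows:
+
+DEBUGVAR="${SCRIPT}_DEBUG"
+DEBUG="${!DEBUGVAR:-0}"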
This defines a variable 'DEBUGVAR' which contains the name of the script concatenated with '_DEBUG'. Then, assuming the script name is testscript, the 'DEBUG' variable is defined to contain the contents of a variable called 'testscript_DEBUG'. The exclamation mark ('!') in front of the variable name causes Bash to use its contents, which is a form of indirection. If the indirected variable is not found a default of zero is set.
+
This means that debugging can be turned on by calling the script thus:
+
testscript_DEBUG=1 ./testscript
+
Variables set from the command line are visible to scripts. One set on the command line only lasts while the script (or command) is executing.
+
You could set it as an exported (environment) variable:
+
export testscript_DEBUG=1
+./testscript
+
and then it would continue after the script had run. I prefer not to do this.
+
I name my debug variables as I do so that there’s less chance of them affecting scripts other than the one I’m currently debugging!
+
Conclusion
+
These are just three things I have found myself using in recent Bash scripts, which I hope might prove to be useful to you.
+
If you have hints like this which you could share, please make an HPR show about them. We are always in need of shows, and at the time of writing (2022-02-26) we are particularly in need!
I don’t think I mentioned the need for sort -z in the audio. However later testing showed that this option is needed to sort the output properly.↩︎
+
diff --git a/eps/hpr3608/hpr3608_full_shownotes.html b/eps/hpr3608/hpr3608_full_shownotes.html
new file mode 100755
index 0000000..684abf8
--- /dev/null
+++ b/eps/hpr3608/hpr3608_full_shownotes.html
@@ -0,0 +1,333 @@
Battling with English - part 5 (HPR Show 3608)
+
Confused homophones; misunderstanding words from other countries; Eggcorns
+
Dave Morriss
+
Table of Contents
+
Overview
+
This time I have three main subjects to discuss, all of them dealing with misunderstandings of words:
+
+
Mistakes made with homophones, one group of examples
+
+
The definition gets a little technical, see the Wikipedia description.
+
+
Misunderstandings of words from other languages
+
+
Pundit
+
+
Looking at Eggcorns (a name chosen from a misspelling of acorn)
+
+
Wikipedia: an alteration of a phrase through the mishearing or reinterpretation of one or more of its elements
+
+
+
Misunderstanding homophones
+
As we have seen, homophones are usually words that have different spellings but sound the same. One of the most often quoted groups of such words is reign, rein and rain, but there are many others. We will also look at passed and past, as well as poring and pouring, in a later show.
+
Here are some definitions (I have limited the meanings for the sake of brevity but the links here and below give access to more information if required):
rain, meaning 1: to fall in drops of water from the clouds
+
+Correct usage            Incorrect usage          Comment
+rein in                  reign in                 To rein in is to slow something down or get it under control. It’s related to horse riding. So reign in 👎 is incorrect.
+free rein                free reign               The meaning of free rein is to give complete freedom or full control. It’s again related to horse riding. So free reign 👎 is incorrect.
+Anarchy reigns supreme   Anarchy reins supreme    The meaning is that anarchy controls everything; there is nothing but anarchy. The incorrect version uses reins 👎.
+
Words from other languages
+
English, in common with many (or maybe all) languages, has absorbed words from other languages. We will look at one case in this episode, with some common errors probably caused by a misunderstanding of the word.
A pundit is a person who offers to mass media opinion or commentary on a particular subject area (most typically politics, the social sciences, technology or sport).
meaning 1: a learned person; an expert or authority
+meaning 2: a person who makes comments or judgments in an authoritative manner
+
+
This word originates from the Hindi term pandit, meaning a learned man. It was brought into English from India, maybe as long ago as the 1600s.
+
In my younger days it was common to hear the then Prime Minister of the Republic of India, Jawaharlal Nehru, being referred to as Pandit Nehru on news programs and elsewhere.
+
+
+Original   Incorrect          Comment
+pundit     pundant, pundint   Both are wrong: pundant 👎, pundint 👎
+
I was surprised to hear someone I follow on YouTube incorrectly saying ‘pundant’ within the past week. This error seems to be spreading for some reason.
+
Looking at Eggcorns
+
The name for this linguistic phenomenon was coined by Professor Geoffrey Pullum in 2003. It came from a discussion of a case where the phrase egg corn had been used instead of acorn.
+
The term is used to describe cases where someone uses analogy and logic to make sense of an expression which uses a term which is not meaningful to them. Eggcorns are of interest to linguists since they show language evolving, and indicate possible reasons why a change has occurred.
+
For example, the expression in one fell swoop might be replaced by in one foul swoop because fell is not much used in common parlance and foul appears to replace the meaning.
+
Here are a few eggcorns taken from the Eggcorn Database mentioned below.
+
+
+Eggcorn                      Original                            Comment
+damp squid                   damp squib                          A firework (squib) which has become wet and fails to go off. Used to describe something that doesn’t work properly or fails to meet expectations.
+for all intensive purposes   (for|to) all intents and purposes   For every functional purpose; in every practical sense; in every important respect. The original expression comes from English law in the 1500s.
+old timer’s disease          Alzheimer’s disease                 A neurodegenerative disease that is commonest in people over 65 years of age. The eggcorn is wrong, but almost makes sense.
+with baited breath           with bated breath                   Usually means to wait with anticipation or excitement. Bated means restrained and is related to abated. The image is of one holding one’s breath in excitement.
+
If this subject interests you, have a look at the Eggcorn Database and Eggcorn Forum in the Links section below.
While we may be “dedicated to sharing knowledge”, we are competing
+for the time and attention of our Audience. Therefore we are in the
+Entertainment Business.
+
There’s no Business like …
+
The clue is in this statement from the about page.
+
+
Hacker Public Radio (HPR) is an Internet Radio show (podcast) that
+releases shows every weekday Monday through Friday.
+
+
Let’s compare that to others.
+
Any event promoter needs to provide the Who, What, Where, and When,
+to their potential audience.
+
+
U2:UV Achtung Baby Live playing the Las Vegas Sphere, from 23 to
+30 June 2024.
+
Richard III in the Globe Theatre, All Summer. (The resident players
+are implied)
+
BBC News at Ten. (The what and when are in the name, BBC News Team
+is implied, as is Daily)
+
Season 2 of Firefly returns to Netflix in the Fall.
+
+
A theater will have an address and a schedule for when the events
+occur. On TV and radio they have predefined channel locations, and often
+have a 24x7 schedule of programs.
+
For Hacker Public Radio (HPR) our “venue” or channel is our RSS Feed,
+and our schedule is a show every weekday Monday through Friday.
+
A podcast production enterprise, like NPR, the BBC, etc., has
+permanent staff whose day job is to come in and create content. Another
+approach, used by Netflix, Disney+, etc., is to commission external
+parties to record unique content. They might also just purchase
+shows. Regardless of the approach, they all have a mechanism to meet the
+production schedule.
+
Unlike other podcasters, HPR has no control over our supply chain. We
+do still have a contract to deliver one “product” a day Monday to
+Friday, but we also have no control over our distribution channels.
+
I think it’s important to understand just how much energy goes into
+managing this balancing act.
+
It’s the absolute core of the project, and is what
+takes up most of our time and energy.
+
Feeding the Queue
+
We have to feed the queue.
+
Control of our supply chain
+
+
A supply chain, .. is a complex logistics system that consists of
+facilities that convert raw materials into finished products and
+distribute them to end consumers or end customers. Meanwhile, supply
+chain management deals with the flow of goods within the supply chain in
+the most efficient manner. From Wikipedia, the
+free encyclopedia
+
+
Our supply chain is entirely dictated by the generous hosts who
+donate their time to recording a show. Therefore the Janitors have no
+control over when shows are sent in. As Janitors, we can only contact
+the community to remind them to send in shows.
+
We need your help to manage this.
+
Boom/Bust Supply
+
Usually there is a burst of contributions after a “Call for Shows”,
+which is itself a result of a lull in the amount of contributions.
+This leads to boom and bust/sawtooth delivery of shows. There is a
+painful pattern of behaviour that the Janitors observe after a “Call for
+Shows”.
+
+
There is a burst of contributions all taking the first available
+slots.
+
The queue quickly fills up the upcoming weeks.
+
It takes time for the “Call for Shows” to get to everyone.
+
A potential host is late hearing the “Call for Shows” and sees a
+full queue, resulting in them not submitting a show.
+
Worse is that it instills the feeling that HPR is “Crying
+Wolf”, leading the host to incorrectly assume the subsequent “Call for
+Shows” can safely be ignored.
+
After a few weeks our queue is empty and we need to put out another
+“Call for Shows”, which that host ignores.
+
+
The timely delivery of shows is an inherent challenge with volunteer
+contributions. Fortunately this is a well understood problem known as Queueing
+theory, and we have implemented the Reserve
+Queue, as a means to regulate/buffer the incoming
+delivery with outgoing supply.
+
+
The reserve queue is intended only to be used in the cases where
+there is still a gap in the schedule one week prior to release. This was
+known as the emergency queue, but now can also be used when the hosts
+don’t care when the shows are scheduled. They will be used on a first in
+first out basis, when there is no conflict with the scheduling
+guidelines. These shows contain a message alerting listeners to the fact
+that we had free slots that were not filled.
+
+
Scheduling Guidelines
+
When you are contributing a show, you decide when to post your show.
+The choice of slot may even encourage others to submit a show
+themselves.
+
Our observations show that there is a Goldilocks
+Zone where there are just the right number of free slots to
+encourage contributions.
+
Too many free slots
+
When there are too many free slots some people get disheartened and
+don’t want to contribute to a dying project.
+
On the other hand too many free slots can send regular hosts into a
+panic to fill them. We all suffer from this, and it can lead to burnout.
+Fortunately we now have the Reserve
+Queue, where they can post their backup shows at a time that is
+convenient to them.
+
The idea that some shows are sub-par because they are rushed in to
+fill free slots can now be put to rest. All the shows in the Reserve
+Queue are there because the Host did not feel the need to rush the
+shows out.
+
Too few free slots
+
On the other hand, when there are too few free slots some people get
+disheartened that their show won’t be aired for weeks, so end up not
+recording a show in the first place.
+
Hacking Human Behaviour
+
So the HPR Community can influence the supply chain by being smart
+about how we schedule the shows.
You must have your audio recording ready to upload before you pick a
+slot.
+
New hosts, Interviews, and other time critical shows should use the
+first free slot.
+
Always try and fill any free slots that are available in the
+upcoming two weeks.
+
When the queue is filling up then leave some slots free for new
+contributors.
+
Post non-urgent shows into the first empty week.
+
If you are uploading a series of shows then post them one every two
+weeks.
+
If you have a non-urgent show that is timeless, then add every
+second show to the Reserve
+Queue.
+
+
This way your (the person hearing this) actions give
+the HPR Community complete control over the supply of shows in a general
+sense.
+
Remember the HPR Community have not missed a day since September
+2009.
+
Janitors Covenant
+
The Janitors will continue to process and post the shows, so long as
+you, the HPR community, continue to send them in. The
+Janitors Covenant is to continue to produce shows as long as people send
+them in. If people stop sending them in we will shut the project down
+with grace and a big send off.
+
The order of the Mop
+
Before we go any further we should give a nod to the people who give
+up their free time to keep the shows pumping.
Getting our podcast distributed is no problem whatsoever.
+
While NPR, the BBC, Netflix, Disney+, etc. can afford to record unique
+content, unique content is very, very expensive. Amazon, Apple and
+Spotify may have the resources to do this, but they and others make use
+of the freely available content to inflate their inventory of
+content.
We have no control over what they do with the feed, how often they use
+it, if they cache it, if they use the images from it, if they show the
+explicit tag, or, as in this case, if they display the
+host or not.
+
That’s why you can help by taking up the mop and becoming the Janitor
+for your Distribution Channel.
+
diff --git a/eps/hpr4243/hpr4243_1.JPG b/eps/hpr4243/hpr4243_1.JPG
new file mode 100755
index 0000000..6e214c0
Binary files /dev/null and b/eps/hpr4243/hpr4243_1.JPG differ
diff --git a/eps/hpr4243/hpr4243_2.JPG b/eps/hpr4243/hpr4243_2.JPG
new file mode 100755
index 0000000..b49888b
Binary files /dev/null and b/eps/hpr4243/hpr4243_2.JPG differ
diff --git a/eps/hpr4243/hpr4243_3.JPG b/eps/hpr4243/hpr4243_3.JPG
new file mode 100755
index 0000000..6330112
Binary files /dev/null and b/eps/hpr4243/hpr4243_3.JPG differ
diff --git a/eps/hpr4251/hpr4251_image_1.png b/eps/hpr4251/hpr4251_image_1.png
new file mode 100755
index 0000000..a7c1dc0
Binary files /dev/null and b/eps/hpr4251/hpr4251_image_1.png differ
diff --git a/eps/hpr4263/hpr4263_image_1.jpg b/eps/hpr4263/hpr4263_image_1.jpg
new file mode 100755
index 0000000..36181a0
Binary files /dev/null and b/eps/hpr4263/hpr4263_image_1.jpg differ
diff --git a/eps/hpr4263/hpr4263_image_1_tn.jpg b/eps/hpr4263/hpr4263_image_1_tn.jpg
new file mode 100755
index 0000000..cdb0f4a
Binary files /dev/null and b/eps/hpr4263/hpr4263_image_1_tn.jpg differ
diff --git a/eps/hpr4274/hpr4274_image_1.png b/eps/hpr4274/hpr4274_image_1.png
new file mode 100755
index 0000000..02571e6
Binary files /dev/null and b/eps/hpr4274/hpr4274_image_1.png differ
diff --git a/eps/hpr4274/hpr4274_image_2.png b/eps/hpr4274/hpr4274_image_2.png
new file mode 100755
index 0000000..22a357e
Binary files /dev/null and b/eps/hpr4274/hpr4274_image_2.png differ
diff --git a/eps/hpr4274/hpr4274_image_3.png b/eps/hpr4274/hpr4274_image_3.png
new file mode 100755
index 0000000..f8481b6
new file mode 100644
index 0000000..1aa4dd3
Binary files /dev/null and b/eps/hpr4532/hpr4532_image_2.png differ
diff --git a/eps/hpr4532/hpr4532_image_2_tn.png b/eps/hpr4532/hpr4532_image_2_tn.png
new file mode 100644
index 0000000..502f281
Binary files /dev/null and b/eps/hpr4532/hpr4532_image_2_tn.png differ
diff --git a/eps/hpr4532/hpr4532_image_3.jpeg b/eps/hpr4532/hpr4532_image_3.jpeg
new file mode 100644
index 0000000..7d48f87
Binary files /dev/null and b/eps/hpr4532/hpr4532_image_3.jpeg differ
diff --git a/eps/hpr4532/hpr4532_image_3_tn.jpeg b/eps/hpr4532/hpr4532_image_3_tn.jpeg
new file mode 100644
index 0000000..4ce9e38
Binary files /dev/null and b/eps/hpr4532/hpr4532_image_3_tn.jpeg differ