Last updated 2017/10/18

This document contains one-line commands that are useful for various things. I don't have many at this time (I use plenty, but only decided to start this now - 2012/02/09 - after coming up with one that was useful to me). I'll likely add more in time. Feel free to use these anywhere - scripts, whatever. Please don't copy the file itself; link to it instead, or at least give credit where credit is due. Think of your actions and how they will affect others, please.

1. Loops/Iterations

This section contains loop constructs, e.g. the bash for loop. A loop lets you perform the same set of actions several times, which makes it possible to act on multiple files (everything in Unix is a file, remember?).
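
For reference, the general form used throughout this section is the bash for loop; a trivial sketch that just echoes each file name in the current directory:

$ for f in *; do echo "$f"; done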

1.1. Find duplicate lines in multiple files

Need to search a type of file (e.g. *.cpp) for duplicate #include's? Double inclusion is normally rendered harmless by include guards, but what if you accidentally include a header twice anyway? There's no point to it, and it only adds to compile time (no matter how small, it can add up). So, to show which files have duplicate #include's and what the duplicated lines are, you can do something like this:

$ for i in *.cpp; do grep -H "#include" "$i" | sort - | uniq -d; done

This simply loops through each file ending with .cpp in the current working directory and greps it for #include. Note the option -H: it makes grep print the file name (grep normally does that only when given multiple files, and here we invoke it one file at a time). We need the sort because uniq -d only detects duplicate lines IF they are adjacent, and we sort each file individually so duplicates within one file end up together. So: -H shows the file, "#include" is what we're looking for and "$i" is the file in question. Then we pipe the matches to sort (the - means standard input, i.e. the output of the previous command - indeed this is the power of pipes). The sort sorts those lines, and then we pipe that to uniq -d, which shows the duplicate adjacent lines. The rest is the ending of the for construct.
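
If you also wanted to descend into subdirectories, one possible variation (a sketch; the little inner shell and the _ placeholder for $0 are just one way to do it) is to let find drive the same pipeline:

$ find . -name '*.cpp' -exec sh -c 'grep -H "#include" "$1" | sort | uniq -d' _ {} \;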

1.2. Resize a selection of images and change extension (updated 2013/04/02)

Let's say your camera names files in caps. Let's also say the images are too large for normal display (or upload). With the ImageMagick package installed, you should have a program called 'convert'. With a bash for loop, you can not only resize the images to a more reasonable size, you can also change the JPG (or PNG/whatever) extension to its lower-case counterpart. Further, it won't overwrite the originals. Here's how:

$ for i in *.JPG; do convert -resize 640x480 "$i" "$(echo "$i" | sed 's/JPG/jpg/g')"; done

All it does: loop through every file in the current directory that ends with .JPG and run convert on each. Specifically, we run convert -resize. The first "$i" is the input file. The next part is what changes the extension: we 'echo' the file name and pass it to sed (via a pipe), which replaces JPG with jpg. Since the whole thing is grouped inside $( ... ), its result becomes the output file argument of convert. Note the old way is to use backquotes / grave accents (i.e. `). That still works, and in some cases is still useful, but in this command line $() works just as well.
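
Incidentally, if you'd rather avoid the echo/sed round trip, bash parameter expansion can strip the suffix directly (this assumes the extension is exactly .JPG):

$ for i in *.JPG; do convert -resize 640x480 "$i" "${i%.JPG}.jpg"; done

${i%.JPG} removes the trailing .JPG and we append .jpg ourselves; unlike the sed expression with /g, this cannot accidentally touch a 'JPG' appearing elsewhere in the name.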

2013/04/02 update:

The above is a bit limited, though: it only processes files in the current directory ending with .JPG - and what if you don't want to process all of those files with those parameters? The convert program acts on one file, so you can't just list all of them in one command line (without a loop of some kind). What you can do, however, is change the logic so that it works on any file extension and, optionally, only on files you specify. The below shows how to handle any extension; the rest I will leave as an exercise to the reader, with a hint: the shell command 'shift' can be very powerful in scripts for moving through a series of arguments while always referring to the current one by the same variable (so you don't have to keep track of where you are in the list). Note also that the below is not exactly a one-line command, but it still fits the topic, and sometimes you need to save the result of one command for the next step. Lastly, in the below we use the variable NAME (or, when referenced, $NAME) to store the name of the file we are processing, EXT for the file extension and OUTFILE for the file we will save the converted image to.

$ EXT=$(echo "$NAME" | sed 's/^.*\.\(.*\)$/\1/')
$ BASENAME=$(basename -s ".$EXT" "$NAME")
$ OUTFILE=${BASENAME}_640x480.$EXT
$ OUTFILE=$(echo "$OUTFILE" | tr 'A-Z' 'a-z')
$ /usr/bin/convert "$NAME" -resize 640x480 "$OUTFILE"

What does this do? A brief explanation for each line. 1) We extract the extension of the file name and store it in EXT. If the name ends with .Jpg then $EXT will be 'Jpg', and if it ends with .PNG.png then $EXT will be 'png', because the pattern is anchored to the end of the line (the $ character). 2) We want the base name (the name without the extension). We pass the -s option to basename, which removes the given suffix (here, a dot followed by whatever is in EXT). In the end BASENAME holds the file name up to the character before the last '.' in it. 3) First part of determining the output file name: $BASENAME followed by _640x480, a dot, and then what is in $EXT. 4) Then we change any upper-case letters to lower case and store the result back in OUTFILE (which we first set at line 3). 5) The actual convert command.
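
To make that concrete: for a hypothetical input file IMG_1234.JPG (i.e. NAME=IMG_1234.JPG), the steps above yield

$ echo "$EXT" "$BASENAME" "$OUTFILE"
JPG IMG_1234 img_1234_640x480.jpg

and the convert line then writes the resized copy to img_1234_640x480.jpg, leaving the original untouched.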

1.3. Remove all but the X most recent files in a directory (added 2012/07/09, updated 2013/04/02)

2017/10/18 update: Warning: I should really point out that this command has some flaws in some cases. In my case the files in question are uniquely named and have no spaces, newlines or anything of that nature, so the command works; but it's not always that simple. I was experimenting further just now, and with the default IFS even a single file with spaces in its name (quoted or not) won't be deleted; and although the '-f' option to `rm' takes care of a file name appearing more than once (much as `rm file1* file11' would), it's not that simple when it comes to deletion. So this command is rather limited in functionality. In the end, of course, spaces etc. in file names are problematic and should be avoided; put another way: this command should only be used if you yourself have complete control over the file names and there is no chance of any mischief! Is there another way? Possibly, but I don't have the time or energy to pursue it; I did, however, feel obligated to point this out. Incidentally, whenever you want to loop through files (even just by '*'), it's useful to put `echo' in front of the commands which reference each item in the loop (example below). With that warning in mind:

The task is easy enough to understand: you might have a directory full of files (in my case, mysql dumps - daily for the past year) and you want to get rid of all of them except the 20 most recent. One little pipeline does this task.

$ for file in $(ls -1tr | head -n -20); do rm -f "$file"; done

Simply put, ls -1 (that's a digit one, not the letter 'ell') shows each file on its own line; -t sorts by modification time (newest first) and -r reverses that, giving oldest to newest. (Plain ls sorts by name, which only happens to match age order if the names sort chronologically, as my date-stamped dumps did.) So what this does is place each of those file names in turn in the variable 'file'. But observe the pipe - we take all but the last 20. And the action for each loop iteration is rm -f on the file. That's all there is to it. Note you should generally confirm actions like this if you are not sure of them by, say, placing 'echo' before the rm -f (or whatever the potentially dangerous command is). So if you wanted to list the files that it'd delete, you could do the below. Then, when you're done confirming, simply remove the echo so the command itself is executed and not just displayed.

$ for file in $(ls -1tr | head -n -20); do echo rm -f "$file"; done
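
For what it's worth, here is one possible way around the file name caveats above - a sketch only, and it assumes GNU find plus reasonably recent GNU coreutils (the -z options need it). The idea is to keep everything NUL-delimited so spaces and newlines in names never get a chance to split anything:

$ find . -maxdepth 1 -type f -printf '%T@ %p\0' | sort -z -n | head -z -n -20 | cut -z -d' ' -f2- | xargs -0 -r rm -f

find prints each file's modification time and name, NUL-terminated; sort -z -n orders them oldest first; head -z -n -20 removes the 20 newest from the list (so what remains is what gets deleted); cut strips the timestamp; and xargs -0 hands the survivors to rm. The same echo trick applies here: use xargs -0 -r echo rm -f first to preview.
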
2. System Administration

This section has one-line commands that are useful for system administration.

2.1. Find accounts with no passwords (and variations)

If you remember/realise that awk and sed both have grep functionality, then you also realise/know that you can do this with all three (and others, even). An account that needs no password generally has an empty password field. I'm not referring to the password being disabled.

The second field in /etc/shadow (likewise in /etc/passwd - and I pray you have the shadow package installed, for your own sake and others'(!)) is the password entry. The fields are separated by :'s. Therefore, to see if no password exists, you can run any of the commands below - they are all equivalent.

(Note: if either of these files has a line starting with the field separator, you have a more important issue to resolve.)

# grep -E "^[a-zA-Z0-9]+::" /etc/shadow
# awk '/^[a-zA-Z0-9]+::/' /etc/shadow
# sed -n '/^[A-Za-z0-9]\+::/p' /etc/shadow

In more specific terms, what are we searching for? It starts out with '^', which anchors the match to the start of the line. If the pattern is not at the beginning of a line, we won't see it. Then, the [..] construct is a character range: a-zA-Z0-9 means any alphanumeric character. If we put a ^ at the beginning of it, as in [^a-z], it would instead mean invert the test.

In any case, we then want at least one and possibly more of these characters, so we give it the + (which means exactly that in regular expressions). Now, since a password not existing means the field is empty, we expect to see two colons in a row (::) right after the user name.

The grep is simpler, but the others offer more possibilities - see the awk(1) and sed(1) man pages for better ideas. For example, you could delete the offending lines (not that I'm recommending that - better to get the list and then fix each account [if it belongs to a program, figure out why; or remove the program and then the user]). In short, never have a user without a password. That's asking for trouble!
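
Since the character class above misses user names containing dots, dashes or underscores, a field-based awk test is arguably more robust (a variation, not a replacement): it prints the name of any account whose second field is empty.

# awk -F: '$2 == "" {print $1}' /etc/shadow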

2.2. Tarball Extraction Transformation (change name of output files while extracting)

I found out about this a couple of weeks ago, when I was looking for exactly such a feature: if a tarball's base/root directory is 'program/' and I want its contents extracted directly into my target directory, without a leading 'program/' to move files out of afterwards, is there a way? Upon inspecting the man page of tar, I found something very interesting and immediately knew it would work:

--transform=EXPRESSION, --xform=EXPRESSION
use sed replace EXPRESSION to transform file names


Awesome. While I'm not a guru at sed, I certainly know enough to make use of this in more than enough ways. And one of those ways is indeed what I needed.

In my case I simply needed to replace the base directory with nothing (the empty string, i.e. remove it). I could also apply it to certain files only, but in any case, here's how I would approach the situation I mentioned.

$ tar xvf archive.tar.gz --transform='s/program\///g'

What that does is strip off the program/ in the file names as tar extracts them. Note you can't preview this simply by listing instead of extracting, because listing shows the original file names by default (GNU tar's --show-transformed-names option changes that; see below). But as long as there won't be a name clash (which would overwrite the files whose names clash), it is really just a matter of extracting, and the transformation does whatever the sed expression says.
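
As an illustration (with hypothetical member names), --show-transformed-names lets you preview what the transformation will produce while listing:

$ tar tf archive.tar.gz
program/Makefile
program/main.c
$ tar tf archive.tar.gz --transform='s/program\///g' --show-transformed-names
Makefile
main.c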

2.3. Tarball Extraction Remove Parent directories (added 2013/03/29)

Of course, as is common in Unix, there exists another way to do the above - something I missed before, but something that is, for what I needed, better. The above stripped the directory off by its name and, worse, I did not check whether there was another directory below the root also called 'program/': since the expression has the /g flag and is not anchored, program/program/ would have been collapsed just like program/ was. How would you correct this? There are two ways, and I'll show both here for completeness. Using transformation we would do this:

$ tar xvf archive.tar.gz --transform='s/^program\///g'

The ^ in this context means the beginning of the line. In other contexts (such as the first character in a [...] selection) it means something else (i.e. match any character except those listed, up until the ]). However, because we're dealing with directories we have a better option: we can just specify how many leading components to remove from the files in the archive:

$ tar xvf archive.tar.gz --strip-components=1
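
The count is the number of leading path components to drop. For a hypothetical member program/src/main.c:

$ tar xvf archive.tar.gz --strip-components=1   # extracts as src/main.c
$ tar xvf archive.tar.gz --strip-components=2   # extracts as main.c
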
3. grep, sed & awk

This section shows basic usage of very powerful pattern matching utilities. There are books on the subject and I by no means profess to know these utilities inside and out. This section started because I wanted to edit some files in place, but only if they matched a certain pattern. Since pattern matching includes exclusion (in conclusion ...), and since I often forget the exclusion syntax of sed and awk, I wanted to write this to remind myself in the future - but more so to brand it into my memory.

3.1) sed: grep with power

grep is very powerful. But since sed has grep functionality, it is even more powerful. sed being a stream editor, one might need to edit a stream (which can be a text file, for example) but only edit certain patterns, or certain patterns in a certain range, or only when the pattern is found in a line that doesn't match something else (which one might argue is then its own pattern, and they might be right in the end). This is how you use sed to match the most basic forms of grep:

$ grep <string> <file>
$ grep -v <string> <file>

where the first finds any line in the file that includes the string and the second finds any line that does not. This is very basic and there is much more sed can do. Nevertheless, it is useful because you can run commands other than printing. Indeed, this is a contrived example, but it demonstrates how to apply sed commands to certain patterns only.

$ sed -n '/#include/p' source.c
$ sed -n '/#include/!p' source.c

The sed option -n means do not print lines unless instructed to by a print command (p). The first example searches for lines that include the text '#include'.

The second example is the reverse. sed is line-based, so each line is processed in turn; the ! means 'not', so it inverts the match and p prints only the lines that do not contain #include. In more general terms, to use sed like grep you do:

$ sed '/pattern/[command]' <file>
$ sed '/pattern/{[command1];[command2];[commandN]}' <file>

... where [command] refers to a sed command (see your every day sed book or manual for more information).
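
For instance, tying this back to the motivation at the start of this section (editing files in place only where a pattern matches), GNU sed's -i option combined with an address does exactly that. A sketch, with a made-up substitution, that rewrites a header name only on #include lines:

$ sed -i '/#include/s/<stdio\.h>/<cstdio>/' source.c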

3.2) awk: grep with power

grep is very powerful. But since awk has grep functionality, it is even more powerful. This is just like sed, except awk is not a stream editor. It does work on streams too, but it has different uses than sed (although some uses certainly work with both, as I have shown in previous sections). While the sed example (with the similar title to this one) shows the full line, in this example I search for the same pattern but print only the matching field (much like grep's -o option). In other words, the first example below is roughly equivalent to the following grep (there is no exclusion here because we are printing text that matches; however, I will also show the general form of inversion, which is equivalent to the second sed command - what we don't have there is an equivalent of grep's -o option):

$ grep <string> <file>

The same reasoning as with sed applies to why you might use this instead of grep: you can perform other actions on the lines that match.

$ awk '/#include/{print $1}' source.c
$ awk '!/#include/{print $0}' source.c

There is a difference here from the first sed example: this shows only the '#include' field by itself. If you wanted to show the whole line, you would use $0 instead of $1 (as the second example does). Similarly, if you had reason to believe there were more fields (read: columns), you could use $2, etc. (though there may be a limit, depending on the system if not awk itself). You can also change which field separator awk uses: pass -F, as in -F:, to split fields on ':' instead of blanks. The second example, in turn, shows every line that does not include (I love puns, what can I do?) #include.
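
As a quick illustration of -F (circling back to the files from section 2.1), this prints just the user names from /etc/passwd by splitting each line on ':':

$ awk -F: '{print $1}' /etc/passwd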