Remember this long, intricate command from the preface?
$ paste <(echo {1..10}.jpg | sed 's/ /\n/g') \
        <(echo {0..9}.jpg | sed 's/ /\n/g') \
  | sed 's/^/mv /' \
  | bash
Such magical incantations are called brash one-liners.[1] Let’s take this one apart to understand what it does and how it works. The innermost echo commands use brace expansion to generate lists of JPEG filenames:
$ echo {1..10}.jpg
1.jpg 2.jpg 3.jpg ... 10.jpg
$ echo {0..9}.jpg
0.jpg 1.jpg 2.jpg ... 9.jpg
Piping the filenames to sed replaces space characters with newlines:
$ echo {1..10}.jpg | sed 's/ /\n/g'
1.jpg
2.jpg
⋮
10.jpg
$ echo {0..9}.jpg | sed 's/ /\n/g'
0.jpg
1.jpg
⋮
9.jpg
The paste command prints the two lists side by side. Process substitution allows paste to read the two lists as if they were files:
$ paste <(echo {1..10}.jpg | sed 's/ /\n/g') \
        <(echo {0..9}.jpg | sed 's/ /\n/g')
1.jpg   0.jpg
2.jpg   1.jpg
⋮
10.jpg  9.jpg
Prepending mv to each line prints a sequence of strings that are mv commands:
$ paste <(echo {1..10}.jpg | sed 's/ /\n/g') \
        <(echo {0..9}.jpg | sed 's/ /\n/g') \
  | sed 's/^/mv /'
mv 1.jpg 0.jpg
mv 2.jpg 1.jpg
⋮
mv 10.jpg 9.jpg
The purpose of the command is now revealed: it generates 10 commands to rename the image files 1.jpg through 10.jpg. The new names are 0.jpg through 9.jpg, respectively. Piping the output to bash executes the mv commands:
$ paste <(echo {1..10}.jpg | sed 's/ /\n/g') \
        <(echo {0..9}.jpg | sed 's/ /\n/g') \
  | sed 's/^/mv /' \
  | bash
Brash one-liners are like puzzles. You’re faced with a business problem, such as renaming a set of files, and you apply your toolbox to construct a Linux command to solve it. Brash one-liners challenge your creativity and build your skills.

In this chapter, you’ll create brash one-liners like the preceding one, step-by-step, using the following magical formula:
1. Invent a command that solves a piece of the puzzle.
2. Run the command and check the output.
3. Recall the command from history and tweak it.
4. Repeat steps 2 and 3 until the command produces the desired result.
This chapter will give your brain a workout. Expect to feel puzzled at times by the examples. Just take things one step at a time, and run the commands on a computer as you read them.
Note
Some brash one-liners in this chapter are too wide for a single line, so I’ve split them onto multiple lines using backslashes. We do not, however, call them brash two-liners (or brash seven-liners).
Before you launch into creating brash one-liners, take a moment to get into the right mindset:
Be flexible.
Think about where to start.
Know your testing tools.
I’ll discuss each of these ideas in turn.
Be Flexible
A key to writing brash one-liners is flexibility. You’ve learned some awesome tools by this point—a core set of Linux programs (and umpteen ways to run them) along with command history, command-line editing, and more. You can combine these tools in many ways, and a given problem usually has multiple solutions.

Even the simplest Linux tasks can be accomplished in many ways. Consider how you might list .jpg files in your current directory. I’ll bet 99.9% of Linux users would run a command like this:
$ ls *.jpg
But this is just one solution of many. For example, you could list all the files in the directory and then use grep to match only the names ending in .jpg:
$ ls | grep '\.jpg$'
Why would you choose this solution? Well, you saw an example in “Long Argument Lists”, when a directory contained so many files that they couldn’t be listed by pattern matching. The technique of grepping for a filename extension is a robust, general approach for solving all sorts of problems. What’s important here is to be flexible and understand your tools so you can apply the best one in your time of need. That is a wizard’s skill when creating brash one-liners.

All of the following commands list .jpg files in the current directory. Try to figure out how each command works:
$ echo $(ls *.jpg)
$ bash -c 'ls *.jpg'
$ cat <(ls *.jpg)
$ find . -maxdepth 1 -type f -name \*.jpg -print
$ ls > tmp && grep '\.jpg$' tmp && rm -f tmp
$ paste <(echo ls) <(echo \*.jpg) | bash
$ bash -c 'exec $(paste <(echo ls) <(echo \*.jpg))'
$ echo 'monkey *.jpg' | sed 's/monkey/ls/' | bash
$ python -c 'import os; os.system("ls *.jpg")'
Are the results identical or do some commands behave a bit differently? Can you come up with any other suitable commands?
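Here is one more variant to consider (my own sketch, not from the list above): printf expands the glob itself and prints one name per line, with no external command at all. The scratch directory and sample filenames are assumptions for the demo, so it can be run safely anywhere.

```shell
# Sketch: let printf expand the glob, printing one filename per line.
# The temporary directory and demo files are assumptions, not from the text.
cd "$(mktemp -d)"
touch 1.jpg 2.jpg notes.txt
printf '%s\n' *.jpg
```

Because the shell sorts glob matches, this prints 1.jpg and 2.jpg on separate lines, much like the `ls | sed` pipelines earlier in the chapter.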
Think About Where to Start
Every brash one-liner begins with the output of a simple command. That output might be the contents of a file, part of a file, a directory listing, a sequence of numbers or letters, a list of users, a date and time, or other data. Your first challenge, therefore, is to produce the initial data for your command.

For example, if you want to know the 17th letter of the English alphabet, then your initial data could be 26 letters produced by brace expansion:
$ echo {A..Z}
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Once you can produce this output, the next step is deciding how to massage it to fit your goal. Do you need to slice the output by rows or columns? Join the output with other information? Transform the output in a more complicated way? Look to the programs in Chapters 1 and 5 to do that work, like grep and sed and cut, and apply them using the techniques of Chapter 7.

For this example, you could print the 17th field with awk, or remove spaces with sed and locate the 17th character with cut:

$ echo {A..Z} | awk '{print $(17)}'
Q
$ echo {A..Z} | sed 's/ //g' | cut -c17
Q
As another example, if you want to print the months of the year, your initial data could be the numbers 1 through 12, again produced by brace expansion:

$ echo {1..12}
1 2 3 4 5 6 7 8 9 10 11 12

From there, augment the brace expansion so it forms dates for the first day of each month (from 2021-01-01 through 2021-12-01); then run date -d on each line to produce month names:

$ echo 2021-{01..12}-01 | xargs -n1 date +%B -d
January
February
March
⋮
December
Or, suppose you want to know the length of the longest filename in the current directory. Your initial data could be a directory listing:

$ ls
animals.txt cartoon-mascots.txt ... zebra-stripes.txt
From there, use awk to generate commands to count characters in each filename with wc -c:

$ ls | awk '{print "echo -n", $0, "| wc -c"}'
echo -n animals.txt | wc -c
echo -n cartoon-mascots.txt | wc -c
⋮
echo -n zebra-stripes.txt | wc -c
(The -n option prevents echo from printing newline characters, which would throw off each count by one.) Finally, pipe the commands to bash to run them, sort the numeric results from high to low, and grab the maximum value (the first line) with head -n1:

$ ls | awk '{print "echo -n", $0, "| wc -c"}' | bash | sort -nr | head -n1
23
This last example was tricky, generating pipelines as strings and passing them to a further pipeline. Nevertheless, the general principle is the same: figure out your starting data and manipulate it to fit your needs.
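In the spirit of flexibility, here is an alternative sketch of my own (not the chapter’s solution) that skips the generated pipeline entirely: awk’s length function measures each line directly, so no echo-and-wc commands are needed. The scratch directory holds only the three filenames shown above, so the result here is 19 (for cartoon-mascots.txt) rather than the 23 produced by the chapter’s fuller directory; for ASCII filenames, length matches the byte count reported by echo -n | wc -c.

```shell
# Alternative sketch: measure each filename's length directly with awk.
# The scratch directory and the three demo filenames are assumptions.
cd "$(mktemp -d)"
touch animals.txt cartoon-mascots.txt zebra-stripes.txt
ls | awk '{ print length }' | sort -nr | head -n1   # prints 19 here
```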
Know Your Testing Tools
Building a brash one-liner may require trial and error. The following tools and techniques will help you try different solutions quickly:
- Use command history and command-line editing. Don’t retype commands while you experiment. Use techniques from Chapter 3 to recall previous commands, tweak them, and run them.
- Add echo to test your expressions. If you aren’t sure how an expression will evaluate, print it with echo beforehand to see the evaluated results on stdout.
- Use ls or add echo to test destructive commands. If your command invokes rm, mv, cp, or other commands that might overwrite or remove files, place echo in front of them to confirm which files will be affected. (So, instead of executing rm, execute echo rm.) Another safety tactic is to replace rm with ls to list files that would be removed.
- Insert a tee to view intermediate results. If you want to view the output (stdout) in the middle of a long pipeline, insert the tee command to save output to a file for examination. The following command saves the output from command3 in the file outfile, while piping that same output to command4:

  $ command1 | command2 | command3 | tee outfile | command4 | command5
  $ less outfile
OK, let’s build some brash one-liners!
Inserting a Filename into a Sequence

This brash one-liner is similar to the one that opened the chapter (renaming .jpg files), but more detailed. It’s also a real situation I faced while writing this book. Like the previous one-liner, it combines two techniques from Chapter 7: process substitution and piping to bash. The result is a repeatable pattern for solving similar problems.
I wrote this book on a Linux computer using a typesetting language called AsciiDoc. The language details aren’t important here; what matters is each chapter was a separate file, and originally there were 10 of them:

$ ls
ch01.asciidoc ch03.asciidoc ch05.asciidoc ch07.asciidoc ch09.asciidoc
ch02.asciidoc ch04.asciidoc ch06.asciidoc ch08.asciidoc ch10.asciidoc
At some point, I decided to insert an 11th chapter between Chapters 2 and 3. That meant renaming some files. Chapters 3–10 had to become 4–11, leaving a gap so I could make a new Chapter 3 (ch03.asciidoc). I could have renamed the files manually, starting with ch11.asciidoc and working backward:[2]

$ mv ch10.asciidoc ch11.asciidoc
$ mv ch09.asciidoc ch10.asciidoc
$ mv ch08.asciidoc ch09.asciidoc
⋮
$ mv ch03.asciidoc ch04.asciidoc
But this method is tedious (imagine if there were 1,000 files instead of 11!), so instead, I generated the necessary mv commands and piped them to bash. Take a good look at the preceding mv commands and think for a moment how you might create them.
Focus first on the original filenames ch03.asciidoc through ch10.asciidoc. You could print them using brace expansion such as ch{10..03}.asciidoc, like the first example in this chapter, but to practice a little flexibility, use the seq -w command to print the numbers:

$ seq -w 10 -1 3
10
09
08
⋮
03
Then turn this numeric sequence into filenames by piping it to sed:

$ seq -w 10 -1 3 | sed 's/\(.*\)/ch\1.asciidoc/'
ch10.asciidoc
ch09.asciidoc
⋮
ch03.asciidoc
You now have a list of the original filenames. Do likewise for Chapters 4–11 to create the destination filenames:

$ seq -w 11 -1 4 | sed 's/\(.*\)/ch\1.asciidoc/'
ch11.asciidoc
ch10.asciidoc
⋮
ch04.asciidoc
To form the mv commands, you need to print the original and new filenames side by side. The first example in this chapter solved the “side by side” problem with paste, and it used process substitution to treat the two printed lists as files. Do the same here:

$ paste <(seq -w 10 -1 3 | sed 's/\(.*\)/ch\1.asciidoc/') \
        <(seq -w 11 -1 4 | sed 's/\(.*\)/ch\1.asciidoc/')
ch10.asciidoc  ch11.asciidoc
ch09.asciidoc  ch10.asciidoc
⋮
ch03.asciidoc  ch04.asciidoc
Tip
The preceding command might look like a lot of typing, but with command history and Emacs-style command-line editing, it’s really not. To go from the single “seq and sed” line to the paste command:
1. Recall the previous command from history with the up arrow.
2. Press Ctrl-A and then Ctrl-K to cut the whole line.
3. Type the word paste followed by a space.
4. Press Ctrl-Y twice to create two copies of the seq and sed commands.
5. Use movement and editing keystrokes to modify the second copy.

And so on.
Prepend mv to each line by piping the output to sed, printing exactly the mv commands you need:

$ paste <(seq -w 10 -1 3 | sed 's/\(.*\)/ch\1.asciidoc/') \
        <(seq -w 11 -1 4 | sed 's/\(.*\)/ch\1.asciidoc/') \
  | sed 's/^/mv /'
mv ch10.asciidoc ch11.asciidoc
mv ch09.asciidoc ch10.asciidoc
⋮
mv ch03.asciidoc ch04.asciidoc
As the final step, pipe the commands to bash for execution:

$ paste <(seq -w 10 -1 3 | sed 's/\(.*\)/ch\1.asciidoc/') \
        <(seq -w 11 -1 4 | sed 's/\(.*\)/ch\1.asciidoc/') \
  | sed 's/^/mv /' \
  | bash
I used exactly this solution for my book. After the mv commands ran, the resulting files were Chapters 1, 2, and 4–11, leaving a gap for a new Chapter 3:

$ ls ch*.asciidoc
ch01.asciidoc ch04.asciidoc ch06.asciidoc ch08.asciidoc ch10.asciidoc
ch02.asciidoc ch05.asciidoc ch07.asciidoc ch09.asciidoc ch11.asciidoc
The pattern I just presented is reusable in all kinds of situations to run a sequence of related commands:

1. Generate the command arguments as lists on stdout.
2. Print the lists side by side with paste and process substitution.
3. Prepend a command name with sed by replacing the beginning-of-line character (^) with a program name and a space.
4. Pipe the results to bash.
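As a compact illustration, the four steps above can be sketched with two hypothetical file lists (the names a.txt, b.txt, a.bak, b.bak are my own, and cp stands in for mv). Prepending echo cp rather than cp gives a dry run: the pipeline prints the commands it would generate instead of executing them, which follows the safety advice from “Know Your Testing Tools”.

```shell
# Steps 1-2: generate two argument lists and paste them side by side.
# Step 3: prepend a command name with sed ("echo cp" here, as a dry run).
# Step 4: pipe the generated commands to bash.
paste <(printf '%s\n' a.txt b.txt) \
      <(printf '%s\n' a.bak b.bak) \
  | sed 's/^/echo cp /' \
  | bash
```

Once the printed commands look right, delete echo from the sed replacement to run them for real.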
Checking Matched Pairs of Files

This brash one-liner is inspired by a real use of Mediawiki, the software that powers Wikipedia and thousands of other wikis. Mediawiki allows users to upload images for display. Most users follow a manual process via web forms: click Choose File to bring up a file dialog, navigate to an image file and select it, add a descriptive comment in the form, and click Upload. Wiki administrators use a more automated method: a script that reads a whole directory and uploads its images. Each image file (say, bald_eagle.jpg) is paired with a text file (bald_eagle.txt) containing a descriptive comment about the image.
Imagine that you’re faced with a directory filled with hundreds of image files and text files. You want to confirm that every image file has a matching text file and vice versa. Here’s a smaller version of that directory:

$ ls
bald_eagle.jpg blue_jay.jpg cardinal.txt robin.jpg wren.jpg
bald_eagle.txt cardinal.jpg oriole.txt robin.txt wren.txt
Let’s develop two different solutions to identify any unmatched files. For the first solution, create two lists, one for the JPEG files and one for the text files, and use cut to strip off their file extensions .txt and .jpg:

$ ls *.jpg | cut -d. -f1
bald_eagle
blue_jay
cardinal
robin
wren
$ ls *.txt | cut -d. -f1
bald_eagle
cardinal
oriole
robin
wren
Then compare the lists with diff using process substitution:

$ diff <(ls *.jpg | cut -d. -f1) <(ls *.txt | cut -d. -f1)
2d1
< blue_jay
3a3
> oriole
You could stop here, because the output indicates that the first list has an extra blue_jay (implying blue_jay.jpg) and the second list has an extra oriole (implying oriole.txt). Nevertheless, let’s make the results more precise. Eliminate unwanted lines by grepping for the characters < and > at the beginning of each line:

$ diff <(ls *.jpg | cut -d. -f1) <(ls *.txt | cut -d. -f1) \
  | grep '^[<>]'
< blue_jay
> oriole
Then use awk to append the correct file extension to each filename ($2), based on whether the filename is preceded by a leading < or >:

$ diff <(ls *.jpg | cut -d. -f1) <(ls *.txt | cut -d. -f1) \
  | grep '^[<>]' \
  | awk '/^</{print $2 ".jpg"} /^>/{print $2 ".txt"}'
blue_jay.jpg
oriole.txt
You now have your list of unmatched files. However, this solution has a subtle bug. Suppose the current directory contained the filename yellow.canary.jpg, which has two dots. The preceding command would produce incorrect output:

blue_jay.jpg
oriole.txt
yellow.jpg        This is wrong
This problem occurs because the two cut commands remove characters from the first dot onward, instead of the last dot onward, so yellow.canary.jpg is truncated to yellow rather than yellow.canary. To fix this issue, replace cut with sed to remove characters from the last dot to the end of the string:

$ diff <(ls *.jpg | sed 's/\.[^.]*$//') \
       <(ls *.txt | sed 's/\.[^.]*$//') \
  | grep '^[<>]' \
  | awk '/^</{print $2 ".jpg"} /^>/{print $2 ".txt"}'
blue_jay.jpg
oriole.txt
yellow.canary.jpg
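You can check the effect of that sed expression in isolation. In the regular expression, \. matches a literal dot, [^.]* matches a run of non-dot characters, and $ anchors the match at the end of the line, so only the final extension is removed:

```shell
# The regex strips the last dot and everything after it,
# leaving any earlier dots in the filename untouched.
echo 'yellow.canary.jpg' | sed 's/\.[^.]*$//'   # prints yellow.canary
echo 'bald_eagle.jpg'    | sed 's/\.[^.]*$//'   # prints bald_eagle
```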
The first solution is now complete. The second solution takes a different approach. Instead of applying diff to two lists, generate a single list and weed out matched pairs of filenames. Begin by removing the file extensions with sed (using the same sed script as before) and count the occurrences of each string with uniq -c:

$ ls *.{jpg,txt} \
  | sed 's/\.[^.]*$//' \
  | uniq -c
      2 bald_eagle
      1 blue_jay
      2 cardinal
      1 oriole
      2 robin
      2 wren
      1 yellow.canary
Each line of output contains either the number 2, representing a matched pair of filenames, or 1, representing an unmatched filename. Use awk to isolate lines that begin with whitespace and a 1, and print only the second field:

$ ls *.{jpg,txt} \
  | sed 's/\.[^.]*$//' \
  | uniq -c \
  | awk '/^ *1 /{print $2}'
blue_jay
oriole
yellow.canary
For the final step, how can you add the missing file extensions? Don’t bother with any complicated string manipulations. Just use ls to list the actual files in the current directory. Stick an asterisk (a wildcard) onto the end of each line of output with awk:

$ ls *.{jpg,txt} \
  | sed 's/\.[^.]*$//' \
  | uniq -c \
  | awk '/^ *1 /{print $2 "*"}'
blue_jay*
oriole*
yellow.canary*
and feed the lines to ls via command substitution. The shell performs pattern matching, and ls lists the unmatched filenames. Done!

$ ls -1 $(ls *.{jpg,txt} \
  | sed 's/\.[^.]*$//' \
  | uniq -c \
  | awk '/^ *1 /{print $2 "*"}')
blue_jay.jpg
oriole.txt
yellow.canary.jpg
Generating a CDPATH

In the section “Organize Your Home Directory for Fast Navigation”, you wrote a complicated CDPATH line by hand. It began with $HOME, followed by all subdirectories of $HOME, and ended with the relative path .. (parent directory):
CDPATH=$HOME:$HOME/Work:$HOME/Family:$HOME/Finances:$HOME/Linux:$HOME/Music:..
Let’s create a brash one-liner to generate that CDPATH line automatically, suitable for insertion into a bash configuration file. Begin with the list of subdirectories in $HOME, using a subshell to prevent the cd command from changing your shell’s current directory:

$ (cd && ls -d */)
Family/ Finances/ Linux/ Music/ Work/
Add $HOME/ in front of each directory with sed:

$ (cd && ls -d */) | sed 's/^/$HOME\//g'
$HOME/Family/
$HOME/Finances/
$HOME/Linux/
$HOME/Music/
$HOME/Work/
The preceding sed script is slightly complicated because the replacement string, $HOME/, contains a forward slash, and sed substitutions also use the forward slash as a separator. That’s why my slash is escaped: $HOME\/. To simplify things, recall from “Substitution and Slashes” that sed accepts any convenient character as a separator. Let’s use at signs (@) instead of forward slashes so no escaping is needed:

$ (cd && ls -d */) | sed 's@^@$HOME/@g'
$HOME/Family/
$HOME/Finances/
$HOME/Linux/
$HOME/Music/
$HOME/Work/
Next, lop off the final forward slash with another sed expression:

$ (cd && ls -d */) | sed -e 's@^@$HOME/@' -e 's@/$@@'
$HOME/Family
$HOME/Finances
$HOME/Linux
$HOME/Music
$HOME/Work
Print the output on a single line using echo and command substitution. Notice that you no longer need plain parentheses around cd and ls to create a subshell explicitly, because command substitution creates a subshell of its own:

$ echo $(cd && ls -d */ | sed -e 's@^@$HOME/@' -e 's@/$@@')
$HOME/Family $HOME/Finances $HOME/Linux $HOME/Music $HOME/Work
Add the first directory $HOME and the final relative directory ..:

$ echo '$HOME' \
       $(cd && ls -d */ | sed -e 's@^@$HOME/@' -e 's@/$@@') \
       ..
$HOME $HOME/Family $HOME/Finances $HOME/Linux $HOME/Music $HOME/Work ..
Change spaces to colons by piping all the output so far to tr:

$ echo '$HOME' \
       $(cd && ls -d */ | sed -e 's@^@$HOME/@' -e 's@/$@@') \
       .. \
  | tr ' ' ':'
$HOME:$HOME/Family:$HOME/Finances:$HOME/Linux:$HOME/Music:$HOME/Work:..
Finally, add the CDPATH environment variable, and you have generated a variable definition to paste into a bash configuration file. Store this command in a script to generate the line anytime, like when you add a new subdirectory to $HOME:

$ echo 'CDPATH=$HOME' \
       $(cd && ls -d */ | sed -e 's@^@$HOME/@' -e 's@/$@@') \
       .. \
  | tr ' ' ':'
CDPATH=$HOME:$HOME/Family:$HOME/Finances:$HOME/Linux:$HOME/Music:$HOME/Work:..
Generating Test Files

A common task in the software industry is testing—feeding a wide variety of data to a program to validate that the program behaves as intended. The next brash one-liner generates one thousand files containing random text that could be used in software testing. The number one thousand is arbitrary; you can generate as many files as you want.

The solution will select words randomly from a large text file and create one thousand smaller files with random contents and lengths. A perfect source file is the system dictionary /usr/share/dict/words, which contains 102,305 words, each on its own line:

$ wc -l /usr/share/dict/words
102305 /usr/share/dict/words
To produce this brash one-liner, you’ll need to solve four puzzles:

1. Randomly shuffling the dictionary file
2. Selecting a random number of lines from the dictionary file
3. Creating an output file to hold the results
4. Running your solution one thousand times
To shuffle the dictionary into random order, use the aptly named command shuf. Each run of the command shuf /usr/share/dict/words produces more than a hundred thousand lines of output, so peek at the first few random lines using head:

$ shuf /usr/share/dict/words | head -n3
evermore
shirttail
tertiary
$ shuf /usr/share/dict/words | head -n3
interactively
opt
perjurer
Your first puzzle is solved. Next, how can you select a random quantity of lines from the shuffled dictionary? shuf has an option, -n, to print a given number of lines, but you want the value to change for each output file you create. Fortunately, bash has a variable, RANDOM, that holds a random positive integer between 0 and 32,767. Its value changes every time you access the variable:

$ echo $RANDOM $RANDOM $RANDOM
7855 11134 262
Therefore, run shuf with the option -n $RANDOM to print a random number of random lines. Again, the full output could be very long, so pipe the results to wc -l to confirm that the number of lines changes with each execution:

$ shuf -n $RANDOM /usr/share/dict/words | wc -l
9922
$ shuf -n $RANDOM /usr/share/dict/words | wc -l
32465
You’ve solved the second puzzle. Next, you need one thousand output files, or more specifically, one thousand different filenames. To generate filenames, run the program pwgen, which generates random strings of letters and digits:

$ pwgen
eng9nooG ier6YeVu AhZ7naeG Ap3quail poo2Ooj9 OYiuri9m iQuash0E voo3Eph1
IeQu7mi6 eipaC2ti exah8iNg oeGhahm8 airooJ8N eiZ7neez Dah8Vooj dixiV1f*
ckiejoti6 ieshei2K iX4isohk Ohm5gaol Ri9ah4eX Aiv1ahg3 Shaew3ko zohB4geu
⋮
Add the option -N1 to generate just a single string, and specify the string length (10) as an argument:

$ pwgen -N1 10
ieb2ESheiw
Optionally, make the string look more like the name of a text file, using command substitution:

$ echo $(pwgen -N1 10).txt
ohTie8aifo.txt
Third puzzle complete! You now have all the tools to generate a single random text file. Use the -o option of shuf to save its output in a file:

$ mkdir -p /tmp/randomfiles && cd /tmp/randomfiles
$ shuf -n $RANDOM -o $(pwgen -N1 10).txt /usr/share/dict/words
$ ls                          List the new file
Ahxiedie2f.txt
$ wc -l Ahxiedie2f.txt        How many lines does it contain?
13544 Ahxiedie2f.txt
$ head -n3 Ahxiedie2f.txt     Peek at the first few lines
saviors
guerillas
forecaster
Looks good! The final puzzle is how to run the preceding shuf command one thousand times. You could certainly use a loop:

for i in {1..1000}; do
    shuf -n $RANDOM -o $(pwgen -N1 10).txt /usr/share/dict/words
done
but that’s not as fun as creating a brash one-liner. Instead, let’s pregenerate the commands, as strings, and pipe them to bash. As a test, print your desired command once using echo. Add single quotes to ensure that $RANDOM doesn’t evaluate and pwgen doesn’t run:

$ echo 'shuf -n $RANDOM -o $(pwgen -N1 10).txt /usr/share/dict/words'
shuf -n $RANDOM -o $(pwgen -N1 10).txt /usr/share/dict/words
This command can easily be piped to bash for execution:

$ echo 'shuf -n $RANDOM -o $(pwgen -N1 10).txt /usr/share/dict/words' | bash
$ ls
eiFohpies1.txt
Now, print the command one thousand times using the yes command piped to head, then pipe the results to bash, and you’ve solved the fourth puzzle:

$ yes 'shuf -n $RANDOM -o $(pwgen -N1 10).txt /usr/share/dict/words' \
  | head -n 1000 \
  | bash
$ ls
Aen1lee0ir.txt IeKaveixa6.txt ahDee9lah2.txt paeR1Poh3d.txt
Ahxiedie2f.txt Kas8ooJahK.txt aoc0Yoohoh.txt sohl7Nohho.txt
CudieNgee4.txt Oe5ophae8e.txt haiV9mahNg.txt uchiek3Eew.txt
⋮
If you’d prefer one thousand random image files instead of text files, use the same technique (yes, head, and bash) and replace shuf with a command that generates a random image. Here’s a brash one-liner that I adapted from a solution by Mark Setchell on Stack Overflow. It runs the command convert, from the graphics package ImageMagick, to produce random images of size 100 x 100 pixels consisting of multicolored squares:

$ yes 'convert -size 8x8 xc: +noise Random -scale 100x100 $(pwgen -N1 10).png' \
  | head -n 1000 \
  | bash
$ ls
Bahdo4Yaop.png Um8ju8gie5.png aing1QuaiX.png ohi4ziNuwo.png
Eem5leijae.png Va7ohchiep.png eiMoog1kou.png ohnohwu4Ei.png
Eozaing1ie.png Zaev4Quien.png hiecima2Ye.png quaepaiY9t.png
⋮
$ display Bahdo4Yaop.png     View the first image
Generating Empty Files

Sometimes all you need for testing is lots of files with different names, even if they’re empty. Generating a thousand empty files named file0001.txt through file1000.txt is as simple as:

$ mkdir /tmp/empties         Create a directory for the files
$ cd /tmp/empties
$ touch file{01..1000}.txt   Generate the files
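A quick sanity check (my own addition, using an assumed scratch directory) confirms both the count and the zero-padding: when a brace-expansion endpoint has a leading zero, bash pads every generated number to the width of the longest endpoint, so {01..1000} yields 0001 through 1000.

```shell
# Scratch directory is an assumption for the demo; the brace expansion
# zero-pads 01..1000 to four digits, matching file0001.txt ... file1000.txt.
cd "$(mktemp -d)"
touch file{01..1000}.txt
ls | wc -l      # reports 1000
ls | head -n1   # shows file0001.txt
```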
If you prefer more interesting filenames, grab them randomly from the system dictionary. Use grep to limit the names to lowercase letters for simplicity (avoiding spaces, apostrophes, and other characters that would be special to the shell):

$ grep '^[a-z]*$' /usr/share/dict/words
a
aardvark
aardvarks
⋮
Shuffle the names with shuf and print the first thousand with head:

$ grep '^[a-z]*$' /usr/share/dict/words | shuf | head -n1000
triplicating
quadruplicates
podiatrists
⋮
Finally, pipe the results to xargs to create the files with touch:

$ grep '^[a-z]*$' /usr/share/dict/words | shuf | head -n1000 | xargs touch
$ ls
abases      distinctly    magnolia      sadden
abets       distrusts     maintaining   sales
aboard      divided       malformation  salmon
⋮
Summary

I hope the examples in this chapter helped to build your skills in writing brash one-liners. Several of them provided reusable patterns that you may find useful in other situations.

One caveat: brash one-liners are not the only solution in town. They’re just one approach to working efficiently at the command line. Sometimes you’ll get more bang for the buck by writing a shell script. Other times you’ll find better solutions with a programming language such as Perl or Python. Nevertheless, writing brash one-liners is a vital skill for performing critical tasks with speed and style.
[1] The earliest use of this term (that I know of) is the manpage for lorder(1) in BSD Unix 4.x. Thanks to Bob Byrnes for finding it.
[2] Starting with ch03.asciidoc and working forward would be dangerous—can you see why? If not, create these files with the command touch ch{01..10}.asciidoc and try it yourself.