wget -r -l1 -p -np -k -nH -P 'outdir' http://www.aaa/bbb.html #download bbb.html
#-r: recursively
#-l1: follow links only up to level 1, i.e. download only the pages immediately linked to by www.aaa/bbb.html (could be -l2, -l3, ... for nth level links)
#-p: save all files necessary to view the page locally (images, css)
#-np: do not download parent directory pages
#-k: convert links in the downloaded htmls into local links for offline viewing
#-nH: do not create a 'www.aaa' host directory, as wget would by default
#-P 'outdir': save output inside a directory 'outdir', creating it if it doesn't exist
make a multipage tex document, put its path in INPUT_PATH, and compile it:
pdflatex -synctex=1 "$INPUT_PATH"
this should generate "asdf.pdf" and the synctex file (because of -synctex=1)
run synctex as
synctex view -i 77:1:/path/to/tex/file/./asdf.tex -o asdf.pdf
77:1 means you are on line 77, column 1 of the tex file, and you want to get the corresponding position in the pdf. Note the ugly "/./" between the parent directory and the basename: it must be there!
This is SyncTeX command line utility, version 1.2
SyncTeX result begin
Output:report.pdf
Page:2
x:261.757507
y:492.330658
h:255.095062
v:495.319450
W:100.061378
H:13.902562
before:
offset:0
middle:
after:
SyncTeX result end
amongst other things, this tells you that this position in the tex file corresponds to Page 2 of your pdf.
Now all you have to do is to tell your pdf viewer to open that page. In Okular this would be:
okular -p 2 asdf.pdf
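The page-extraction step above can be scripted. A minimal sketch, with the synctex report hard-coded from the sample output above so the parsing can be shown on its own; in real use you would capture it with something like out=$(synctex view -i 77:1:... -o asdf.pdf):

```shell
# Parse the page number out of a synctex report so a pdf viewer
# can be told to jump to it (sample output copied from the note above)
out='SyncTeX result begin
Output:report.pdf
Page:2
x:261.757507
SyncTeX result end'
# keep only the digits after "Page:" on the first matching line
page=$(printf '%s\n' "$out" | sed -n 's/^Page:\([0-9]*\)$/\1/p' | head -n 1)
echo "page=$page"
# then, for example: okular -p "$page" asdf.pdf
```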
Bash suffers from a few disadvantages:
- Linux only
- quite complicated variable/parameter passing
which in my opinion make python a better choice for anything that is not a one-liner.
On the other hand, learning bash and the basic Linux commands can be very useful for a programmer:
- interface. The bash command line defines an interface that is widely recognized and has survived the test of time, meaning you will be able to understand others' programs and create programs that are more easily understandable. For example, if you name a method 'find' that does more or less the same as the Linux command, chances are that others will immediately understand and remember it, which is a crucial step in coping with the complexity of programs.
- raw power. Some standard bash utilities are so amazingly powerful for the amount of code you have to write that they are simply worth learning.
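As a taste of that raw power, here is the classic word-frequency count in a single pipeline (the input sentence is made up for the demo):

```shell
# Count word frequencies, most frequent first:
# split into one word per line, sort so duplicates are adjacent,
# count them with uniq -c, then sort numerically in reverse
counts=$(printf 'the cat and the dog and the bird\n' |
  tr ' ' '\n' | sort | uniq -c | sort -rn)
printf '%s\n' "$counts"
```

The first line of the output is "3 the" (with some leading padding from uniq -c), since "the" occurs three times.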
sudo apt-get install fdupes
fdupes -rd dirA dirB dirC
will look for duplicates
- under the relative paths dirA, dirB and dirC
- recursively (-r)
- prompting for each set of duplicates: it prints the duplicate paths and asks which one you want to keep (you can also type 'all' to keep all), then deletes the others (-d)
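If fdupes is not installed, the detection step (not the interactive deletion) can be improvised with coreutils; all the paths below are created just for the demo:

```shell
# Find files with identical content: hash every file, sort the hashes,
# and keep only hashes that appear more than once
# (-w32 compares only the first 32 chars, i.e. the md5 hash itself;
#  -D prints all lines in each duplicated group)
tmp=$(mktemp -d)
echo 'same content' > "$tmp/a.txt"
echo 'same content' > "$tmp/b.txt"
echo 'different'    > "$tmp/c.txt"
dupes=$(find "$tmp" -type f -exec md5sum {} + | sort | uniq -w32 -D)
printf '%s\n' "$dupes"   # lists a.txt and b.txt, but not c.txt
rm -rf "$tmp"
```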
echo 'cd ..' > test && chmod +x test
./test does nothing, because ./test is executed in a subshell
. test works, because the '.' (source) operator does the same thing as copying the contents of test and pasting them into the current shell
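A quick demo of the difference, assuming a POSIX-ish shell (the script is created with mktemp just for the demo):

```shell
# A script run as its own command executes in a child process:
# its 'cd' cannot change the parent shell's working directory.
# Sourcing it with '.' runs the same commands in the current shell.
script=$(mktemp)
echo 'cd /' > "$script"
chmod +x "$script"
cd /tmp
bash "$script"            # child process: our cwd is untouched
after_subshell=$PWD
. "$script"               # sourced: our cwd actually changes
after_source=$PWD
echo "after subshell: $after_subshell"
echo "after source: $after_source"
rm -f "$script"
```

Expect the cwd to still be /tmp after the subshell run, and / after sourcing.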
Linux command line is such a powerhouse it already comes with pdf commands, e.g. pdftotext, pdfunite and pdfseparate from the poppler-utils package. Run any of them with --help to learn more, of course.