wget is a useful command for mirroring web pages. The simplest use is to fetch a single web page or file.
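For example, to grab just one page (using the same site as the mirroring example below; the exact URL is only illustrative, any URL works the same way):

wget http://www.lagrange.physics.drexel.edu/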
To mirror an entire site, one might use:
wget -kmp www.lagrange.physics.drexel.edu
The following are useful options for mirroring (these are quotes from the wget man page):
---From man wget ---

-r, --recursive
    Turn on recursive retrieving.

-k, --convert-links
    After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc. Each link will be changed in one of the two ways:

    * The links to files that have been downloaded by Wget will be changed to refer to the file they point to as a relative link.
      Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also downloaded, then the link in doc.html will be modified to point to ../bar/img.gif. This kind of transformation works reliably for arbitrary combinations of directories.

    * The links to files that have not been downloaded by Wget will be changed to include host name and absolute path of the location they point to.
      Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to ../bar/img.gif), then the link in doc.html will be modified to point to http://hostname/bar/img.gif.

    Because of this, local browsing works reliably: if a linked file was downloaded, the link will refer to its local name; if it was not downloaded, the link will refer to its full Internet address rather than presenting a broken link. The fact that the former links are converted to relative links ensures that you can move the downloaded hierarchy to another directory.

    Note that only at the end of the download can Wget know which links have been downloaded. Because of that, the work done by -k will be performed at the end of all the downloads.

-m, --mirror
    Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf --no-remove-listing.

-p, --page-requisites
    This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets.

    Ordinarily, when downloading a single HTML page, any requisite documents that may be needed to display it properly are not downloaded. Using -r together with -l can help, but since Wget does not ordinarily distinguish between external and inlined documents, one is generally left with ''leaf documents'' that are missing their requisites.

    For instance, say document 1.html contains an "<IMG>" tag referencing 1.gif and an "<A>" tag pointing to external document 2.html. Say that 2.html is similar but that its image is 2.gif and it links to 3.html. Say this continues up to some arbitrarily high number. If one executes the command:

        wget -r -l 2 http://<site>/1.html

    then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be downloaded. As you can see, 3.html is without its requisite 3.gif because Wget is simply counting the number of hops (up to 2) away from 1.html in order to determine where to stop the recursion. However, with this command:

        wget -r -l 2 -p http://<site>/1.html

    all the above files and 3.html's requisite 3.gif will be downloaded. Similarly,

        wget -r -l 1 -p http://<site>/1.html

    will cause 1.html, 1.gif, 2.html, and 2.gif to be downloaded. One might think that:

        wget -r -l 0 -p http://<site>/1.html

    would download just 1.html and 1.gif, but unfortunately this is not the case, because -l 0 is equivalent to -l inf---that is, infinite recursion.
    To download a single HTML page (or a handful of them, all specified on the command-line or in a -i URL input file) and its (or their) requisites, simply leave off -r and -l:

        wget -p http://<site>/1.html

    Note that Wget will behave as if -r had been specified, but only that single page and its requisites will be downloaded. Links from that page to external documents will not be followed. Actually, to download a single page and all its requisites (even if they exist on separate websites), and make sure the lot displays properly locally, this author likes to use a few options in addition to -p:

        wget -E -H -k -K -p http://<site>/<document>

    To finish off this topic, it's worth knowing that Wget's idea of an external document link is any URL specified in an "<A>" tag, an "<AREA>" tag, or a "<LINK>" tag other than "<LINK REL="stylesheet">".

-E, --html-extension
    If a file of type application/xhtml+xml or text/html is downloaded and the URL does not end with the regexp \.[Hh][Tt][Mm][Ll]?, this option will cause the suffix .html to be appended to the local filename. This is useful, for instance, when you're mirroring a remote site that uses .asp pages, but you want the mirrored pages to be viewable on your stock Apache server. Another good use for this is when you're downloading CGI-generated materials. A URL like http://site.com/article.cgi?25 will be saved as article.cgi?25.html.

    Note that filenames changed in this way will be re-downloaded every time you re-mirror a site, because Wget can't tell that the local X.html file corresponds to remote URL X (since it doesn't yet know that the URL produces output of type text/html or application/xhtml+xml). To prevent this re-downloading, you must use -k and -K so that the original version of the file will be saved as X.orig.

-H, --span-hosts
    Enable spanning across hosts when doing recursive retrieving.

-L, --relative
    Follow relative links only. Useful for retrieving a specific home page without any distractions, not even those from the same hosts.
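Putting the man page's recommendation together with the site from the earlier example, a self-contained single-page grab might look like the following (the URL is only the example host used above; substitute the page you actually want):

wget -E -H -k -K -p http://www.lagrange.physics.drexel.edu/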
Note: Crossman was mirroring a page with wget and, for reasons never tracked down, wget was not retrieving image files from one particular directory. His workaround was to grep the already-retrieved stylesheet that references all of these images--its lines have the form "blah blah blah background-image:url(image.png); blah blah blah"--pipe the matches through sed to strip off everything except the image filename, and then feed each filename to wget for retrieval.
for i in `grep png nld.css | sed -e 's/[^ ]*(//' -e 's/)[^ ]*//'`; do wget lagrange.physics.drexel.edu/css/$i; done
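A slightly more defensive sketch of the same idea, assuming GNU grep (for -o) and the same css/ path as above, pulls out only the url(...) contents instead of relying on the rest of the line being space-separated:

grep -o 'url([^)]*)' nld.css | sed -e 's/^url(//' -e 's/)$//' | while read img; do
    wget "lagrange.physics.drexel.edu/css/$img"
done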