WGET(1) GNU Wget WGET(1)
NAME
Wget - The non-interactive network downloader.
SYNOPSIS
wget [option]... [URL]...
DESCRIPTION
GNU Wget is a free utility for non-interactive download of files from
the Web. It supports HTTP, HTTPS, and FTP protocols, as well as
retrieval through HTTP proxies.
Wget is non-interactive, meaning that it can work in the background,
while the user is not logged on. This allows you to start a retrieval
and disconnect from the system, letting Wget finish the work. By con-
trast, most Web browsers require the user's constant presence,
which can be a great hindrance when transferring a lot of data.
Wget can follow links in HTML and XHTML pages and create local ver-
sions of remote web sites, fully recreating the directory structure of
the original site. This is sometimes referred to as ''recursive down-
loading.'' While doing that, Wget respects the Robot Exclusion Stan-
dard (/robots.txt). Wget can be instructed to convert the links in
downloaded HTML files to the local files for offline viewing.
Wget has been designed for robustness over slow or unstable network
connections; if a download fails due to a network problem, it will
keep retrying until the whole file has been retrieved. If the server
supports regetting, it will instruct the server to continue the down-
load from where it left off.
OPTIONS
Basic Startup Options
-V
--version
Display the version of Wget.
-h
--help
Print a help message describing all of Wget's command-line
options.
-b
--background
Go to background immediately after startup. If no output file is
specified via -o, output is redirected to wget-log.
-e command
--execute command
Execute command as if it were a part of .wgetrc. A command thus
invoked will be executed after the commands in .wgetrc, thus tak-
ing precedence over them. If you need to specify more than one
wgetrc command, use multiple instances of -e.
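For example, since progress and waitretry (described later in this
manual) are ordinary wgetrc commands, they can be overridden for a
single run; the URL here is only a placeholder:
wget -e 'progress=dot' -e 'waitretry=5' https://example.com/file.tar.gz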
Logging and Input File Options
-o logfile
--output-file=logfile
Log all messages to logfile. The messages are normally reported
to standard error.
-a logfile
--append-output=logfile
Append to logfile. This is the same as -o, only it appends to
logfile instead of overwriting the old log file. If logfile does
not exist, a new file is created.
-d
--debug
Turn on debug output, meaning various information important to the
developers of Wget if it does not work properly. Your system
administrator may have chosen to compile Wget without debug sup-
port, in which case -d will not work. Please note that compiling
with debug support is always safe---Wget compiled with the debug
support will not print any debug info unless requested with -d.
-q
--quiet
Turn off Wget's output.
-v
--verbose
Turn on verbose output, with all the available data. The default
output is verbose.
-nv
--non-verbose
Non-verbose output---turn off verbose without being completely
quiet (use -q for that), which means that error messages and basic
information still get printed.
-i file
--input-file=file
Read URLs from file, in which case no URLs need to be on the com-
mand line. If there are URLs both on the command line and in an
input file, those on the command lines will be the first ones to
be retrieved. The file need not be an HTML document (but no harm
if it is)---it is enough if the URLs are just listed sequentially.
However, if you specify --force-html, the document will be
regarded as html. In that case you may have problems with rela-
tive links, which you can solve either by adding "<base href="url">"
to the documents or by specifying --base=url on the command line.
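For instance, if a plain text file named urls.txt (the name is
arbitrary) lists one URL per line, all of them can be retrieved with:
wget -i urls.txt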
-F
--force-html
When input is read from a file, force it to be treated as an HTML
file. This enables you to retrieve relative links from existing
HTML files on your local disk, by adding "<base href="url">" to
HTML, or using the --base command-line option.
-B URL
--base=URL
When used in conjunction with -F, prepends URL to relative links
in the file specified by -i.
Download Options
--bind-address=ADDRESS
When making client TCP/IP connections, "bind()" to ADDRESS on the
local machine. ADDRESS may be specified as a hostname or IP
address. This option can be useful if your machine is bound to
multiple IPs.
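For example, on a multi-homed machine the outgoing address can be
chosen explicitly; the address and URL below are placeholders:
wget --bind-address=192.0.2.10 https://example.com/file.iso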
-t number
--tries=number
Set number of retries to number. Specify 0 or inf for infinite
retrying. The default is to retry 20 times, with the exception of
fatal errors like ''connection refused'' or ''not found'' (404),
which are not retried.
-O file
--output-document=file
The documents will not be written to the appropriate files, but
all will be concatenated together and written to file. If file
already exists, it will be overwritten. If the file is -, the
documents will be written to standard output.
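For example, -O - combined with -q is a convenient way to feed a
document straight into another program; the URL is a placeholder:
wget -q -O - https://example.com/urls.txt | wc -l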
-nc
--no-clobber
If a file is downloaded more than once in the same directory,
Wget's behavior depends on a few options, including -nc. In cer-
tain cases, the local file will be clobbered, or overwritten, upon
repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, or -r, downloading the same
file in the same directory will result in the original copy of
file being preserved and the second copy being named file.1. If
that file is downloaded yet again, the third copy will be named
file.2, and so on. When -nc is specified, this behavior is sup-
pressed, and Wget will refuse to download newer copies of file.
Therefore, ''"no-clobber"'' is actually a misnomer in this
mode---it's not clobbering that's prevented (as the numeric suf-
fixes were already preventing clobbering), but rather the multiple
version saving that's prevented.
When running Wget with -r, but without -N or -nc, re-downloading a
file will result in the new copy simply overwriting the old.
Adding -nc will prevent this behavior, instead causing the origi-
nal version to be preserved and any newer copies on the server to
be ignored.
When running Wget with -N, with or without -r, the decision as to
whether or not to download a newer copy of a file depends on the
local and remote timestamp and size of the file. -nc may not be
specified at the same time as -N.
Note that when -nc is specified, files with the suffixes .html or
.htm will be loaded from the local disk and parsed as if they had
been retrieved from the Web.
-c
--continue
Continue getting a partially-downloaded file. This is useful when
you want to finish up a download started by a previous instance of
Wget, or by another program. For instance:
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
If there is a file named ls-lR.Z in the current directory, Wget
will assume that it is the first portion of the remote file, and
will ask the server to continue the retrieval from an offset equal
to the length of the local file.
Note that you don't need to specify this option if you just want
the current invocation of Wget to retry downloading a file should
the connection be lost midway through. This is the default behav-
ior. -c only affects resumption of downloads started prior to
this invocation of Wget, and whose local files are still sitting
around.
Without -c, the previous example would just download the remote
file to ls-lR.Z.1, leaving the truncated ls-lR.Z file alone.
Beginning with Wget 1.7, if you use -c on a non-empty file, and it
turns out that the server does not support continued downloading,
Wget will refuse to start the download from scratch, which would
effectively ruin existing contents. If you really want the down-
load to start from scratch, remove the file.
Also beginning with Wget 1.7, if you use -c on a file which is of
equal size as the one on the server, Wget will refuse to download
the file and print an explanatory message. The same happens when
the file is smaller on the server than locally (presumably because
it was changed on the server since your last download
attempt)---because ''continuing'' is not meaningful, no download
occurs.
On the other side of the coin, while using -c, any file that's
bigger on the server than locally will be considered an incomplete
download and only "(length(remote) - length(local))" bytes will be
downloaded and tacked onto the end of the local file. This behav-
ior can be desirable in certain cases---for instance, you can use
wget -c to download just the new portion that's been appended to a
data collection or log file.
However, if the file is bigger on the server because it's been
changed, as opposed to just appended to, you'll end up with a gar-
bled file. Wget has no way of verifying that the local file is
really a valid prefix of the remote file. You need to be espe-
cially careful of this when using -c in conjunction with -r, since
every file will be considered as an "incomplete download" candi-
date.
Another instance where you'll get a garbled file if you try to use
-c is if you have a lame HTTP proxy that inserts a ''transfer
interrupted'' string into the local file. In the future a ''roll-
back'' option may be added to deal with this case.
Note that -c only works with FTP servers and with HTTP servers
that support the "Range" header.
--progress=type
Select the type of the progress indicator you wish to use. Legal
indicators are ''dot'' and ''bar''.
The ''bar'' indicator is used by default. It draws an ASCII
progress bar graphics (a.k.a ''thermometer'' display) indicating
the status of retrieval. If the output is not a TTY, the ''dot''
bar will be used by default.
Use --progress=dot to switch to the ''dot'' display. It traces
the retrieval by printing dots on the screen, each dot represent-
ing a fixed amount of downloaded data.
When using the dotted retrieval, you may also set the style by
specifying the type as dot:style. Different styles assign differ-
ent meaning to one dot. With the "default" style each dot repre-
sents 1K, there are ten dots in a cluster and 50 dots in a line.
The "binary" style has a more ''computer''-like orientation---8K
dots, 16-dots clusters and 48 dots per line (which makes for 384K
lines). The "mega" style is suitable for downloading very large
files---each dot represents 64K retrieved, there are eight dots in
a cluster, and 48 dots on each line (so each line contains 3M).
Note that you can set the default style using the "progress"
command in .wgetrc. That setting may be overridden from the com-
mand line. The exception is that, when the output is not a TTY,
the ''dot'' progress will be favored over ''bar''. To force the
bar output, use --progress=bar:force.
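For example, the ''mega'' dot style described above is convenient for
very large downloads; the URL is a placeholder:
wget --progress=dot:mega https://example.com/dvd.iso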
-N
--timestamping
Turn on time-stamping.
-S
--server-response
Print the headers sent by HTTP servers and responses sent by FTP
servers.
--spider
When invoked with this option, Wget will behave as a Web spider,
which means that it will not download the pages, just check that
they are there. For example, you can use Wget to check your book-
marks:
wget --spider --force-html -i bookmarks.html
This feature needs much more work for Wget to get close to the
functionality of real web spiders.
-T seconds
--timeout=seconds
Set the network timeout to seconds seconds. This is equivalent to
specifying --dns-timeout, --connect-timeout, and --read-timeout,
all at the same time.
Whenever Wget connects to or reads from a remote host, it checks
for a timeout and aborts the operation if the time expires. This
prevents anomalous occurrences such as hanging reads or infinite
connects. The only timeout enabled by default is a 900-second
timeout for reading. Setting timeout to 0 disables checking for
timeouts.
Unless you know what you are doing, it is best not to set any of
the timeout-related options.
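Purely as an illustration (the values and URL are arbitrary), a run
that gives up on unresponsive servers fairly quickly could look like:
wget -T 30 -t 3 https://example.com/file.zip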
--dns-timeout=seconds
Set the DNS lookup timeout to seconds seconds. DNS lookups that
don't complete within the specified time will fail. By default,
there is no timeout on DNS lookups, other than that implemented by
system libraries.
--connect-timeout=seconds
Set the connect timeout to seconds seconds. TCP connections that
take longer to establish will be aborted. By default, there is no
connect timeout, other than that implemented by system libraries.
--read-timeout=seconds
Set the read (and write) timeout to seconds seconds. Reads that
take longer will fail. The default value for read timeout is 900
seconds.
--limit-rate=amount
Limit the download speed to amount bytes per second. Amount may
be expressed in bytes, kilobytes with the k suffix, or megabytes
with the m suffix. For example, --limit-rate=20k will limit the
retrieval rate to 20KB/s. This kind of thing is useful when, for
whatever reason, you don't want Wget to consume the entire avail-
able bandwidth.
Note that Wget implements the limiting by sleeping the appropriate
amount of time after a network read that took less time than spec-
ified by the rate. Eventually this strategy causes the TCP trans-
fer to slow down to approximately the specified rate. However, it
may take some time for this balance to be achieved, so don't be
surprised if limiting the rate doesn't work well with very small
files.
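For example, to cap a single download at roughly 20OKB/s is not intended; to cap it at roughly 200KB/s (the URL is a placeholder):
wget --limit-rate=200k https://example.com/archive.tar.gz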
-w seconds
--wait=seconds
Wait the specified number of seconds between the retrievals. Use
of this option is recommended, as it lightens the server load by
making the requests less frequent. Instead of in seconds, the
time can be specified in minutes using the "m" suffix, in hours
using "h" suffix, or in days using "d" suffix.
Specifying a large value for this option is useful if the network
or the destination host is down, so that Wget can wait long enough
to reasonably expect the network error to be fixed before the
retry.
--waitretry=seconds
If you don't want Wget to wait between every retrieval, but only
between retries of failed downloads, you can use this option.
Wget will use linear backoff, waiting 1 second after the first
failure on a given file, then waiting 2 seconds after the second
failure on that file, up to the maximum number of seconds you
specify. Therefore, a value of 10 will actually make Wget wait up
to (1 + 2 + ... + 10) = 55 seconds per file.
Note that this option is turned on by default in the global wgetrc
file.
--random-wait
Some web sites may perform log analysis to identify retrieval pro-
grams such as Wget by looking for statistically significant simi-
larities in the time between requests. This option causes the time
between requests to vary between 0 and 2 * wait seconds, where
wait was specified using the --wait option, in order to mask
Wget's presence from such analysis.
A recent article in a publication devoted to development on a pop-
ular consumer platform provided code to perform this analysis on
the fly. Its author suggested blocking at the class C address
level to ensure automated retrieval programs were blocked despite
changing DHCP-supplied addresses.
The --random-wait option was inspired by this ill-advised recom-
mendation to block many unrelated users from a web site due to the
actions of one.
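As a combined illustration of the wait-related options above (the
values and host are arbitrary), a deliberately gentle recursive
retrieval might look like:
wget -r -w 2 --random-wait --limit-rate=50k https://example.com/docs/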
-Y on/off
--proxy=on/off
Turn proxy support on or off. The proxy is on by default if the
appropriate environment variable is defined.
For more information about the use of proxies with Wget, see the
GNU Info entry for wget.
-Q quota
--quota=quota
Specify download quota for automatic retrievals. The value can be
specified in bytes (default), kilobytes (with k suffix), or
megabytes (with m suffix).
Note that quota will never affect downloading a single file. So
if you specify wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz, all
of the ls-lR.gz will be downloaded. The same goes even when
several URLs are specified on the command-line. However, quota is
respected when retrieving either recursively, or from an input
file. Thus you may safely type wget -Q2m -i sites---download will
be aborted when the quota is exceeded.
Setting quota to 0 or to inf unlimits the download quota.
--dns-cache=off
Turn off caching of DNS lookups. Normally, Wget remembers the
addresses it looked up from DNS so it doesn't have to repeatedly
contact the DNS server for the same (typically small) set of
addresses it retrieves from. This cache exists in memory only; a
new Wget run will contact DNS again.
However, in some cases it is not desirable to cache host names,
even for the duration of a short-running application like Wget.
For example, some HTTP servers are hosted on machines with dynami-
cally allocated IP addresses that change from time to time. Their
DNS entries are updated along with each change. When Wget's down-
load from such a host gets interrupted by IP address change, Wget
retries the download, but (due to DNS caching) it contacts the old
address. With the DNS cache turned off, Wget will repeat the DNS
lookup for every connect and will thus get the correct dynamic
address every time---at the cost of additional DNS lookups where
they're probably not needed.
If you don't understand the above description, you probably won't
need this option.
--restrict-file-names=mode
Change which characters found in remote URLs may show up in local
file names generated from those URLs. Characters that are
restricted by this option are escaped, i.e. replaced with %HH,
where HH is the hexadecimal number that corresponds to the
restricted character.
By default, Wget escapes the characters that are not valid as part
of file names on your operating system, as well as control charac-
ters that are typically unprintable. This option is useful for
changing these defaults, either because you are downloading to a
non-native partition, or because you want to disable escaping of
the control characters.
When mode is set to ''unix'', Wget escapes the character / and the
control characters in the ranges 0--31 and 128--159. This is the
default on Unix-like OS'es.
When mode is set to ''windows'', Wget escapes the characters \, |,
/, :, ?, ", *, <, >, and the control characters in the ranges
0--31 and 128--159. In addition to this, Wget in Windows mode
uses + instead of : to separate host and port in local file names,
and uses @ instead of ? to separate the query portion of the file
name from the rest. Therefore, a URL that would be saved as
www.xemacs.org:4300/search.pl?input=blah in Unix mode would be
saved as www.xemacs.org+4300/search.pl@input=blah in Windows mode.
This mode is the default on Windows.
If you append ,nocontrol to the mode, as in unix,nocontrol, escap-
ing of the control characters is also switched off. You can use
--restrict-file-names=nocontrol to turn off escaping of control
characters without affecting the choice of the OS to use as file
name restriction mode.
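For example, to force the Windows-style naming described above even
on a Unix system (the URL is a placeholder, quoted so the shell does
not interpret the ?):
wget --restrict-file-names=windows 'https://example.com/search.pl?input=blah'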
Directory Options
-nd
--no-directories
Do not create a hierarchy of directories when retrieving recur-
sively. With this option turned on, all files will get saved to
the current directory, without clobbering (if a name shows up more
than once, the filenames will get extensions .n).
-x
--force-directories
The opposite of -nd---create a hierarchy of directories, even if
one would not have been created otherwise. E.g. wget -x
https://fly.srk.fer.hr/robots.txt will save the downloaded file to
fly.srk.fer.hr/robots.txt.
-nH
--no-host-directories
Disable generation of host-prefixed directories. By default,
invoking Wget with -r https://fly.srk.fer.hr/ will create a struc-
ture of directories beginning with fly.srk.fer.hr/. This option
disables such behavior.
--protocol-directories
Use the protocol name as a directory component of local file
names. For example, with this option, wget -r https://host will
save to https/host/... rather than just to host/....
--cut-dirs=number
Ignore number directory components. This is useful for getting a
fine-grained control over the directory where recursive retrieval
will be saved.
Take, for example, the directory at
ftp://ftp.xemacs.org/pub/xemacs/. If you retrieve it with -r, it
will be saved locally under ftp.xemacs.org/pub/xemacs/. While the
-nH option can remove the ftp.xemacs.org/ part, you are still
stuck with pub/xemacs. This is where --cut-dirs comes in handy;
it makes Wget not ''see'' number remote directory components.
Here are several examples of how --cut-dirs option works.
No options -> ftp.xemacs.org/pub/xemacs/
-nH -> pub/xemacs/
-nH --cut-dirs=1 -> xemacs/
-nH --cut-dirs=2 -> .
--cut-dirs=1 -> ftp.xemacs.org/xemacs/
...
If you just want to get rid of the directory structure, this
option is similar to a combination of -nd and -P. However, unlike
-nd, --cut-dirs does not lose the subdirectory structure---for
instance, with -nH --cut-dirs=1, a beta/ subdirectory will be placed
in xemacs/beta, as one would expect.
-P prefix
--directory-prefix=prefix
Set directory prefix to prefix. The directory prefix is the
directory where all other files and subdirectories will be saved
to, i.e. the top of the retrieval tree. The default is . (the
current directory).
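For example (the directory and URL are arbitrary), the following
saves the file under downloads/ instead of the current directory:
wget -P downloads/ https://example.com/report.pdf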
HTTP Options
-E
--html-extension
If a file of type application/xhtml+xml or text/html is downloaded
and the URL does not end with the regexp \.[Hh][Tt][Mm][Ll]?, this
option will cause the suffix .html to be appended to the local
filename. This is useful, for instance, when you're mirroring a
remote site that uses .asp pages, but you want the mirrored pages
to be viewable on your stock Apache server. Another good use for
this is when you're downloading CGI-generated materials. A URL
like https://site.com/article.cgi?25 will be saved as arti-
cle.cgi?25.html.
Note that filenames changed in this way will be re-downloaded
every time you re-mirror a site, because Wget can't tell that the
local X.html file corresponds to remote URL X (since it doesn't
yet know that the URL produces output of type text/html or appli-
cation/xhtml+xml). To prevent this re-downloading, you must use -k
and -K so that the original version of the file will be saved as
X.orig.
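For example, a sketch of mirroring a dynamically generated site so
that pages remain browsable locally, combining the options mentioned
above (the host is a placeholder):
wget -E -k -K -r https://example.com/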
--http-user=user
--http-passwd=password
Specify the username user and password password on an HTTP server.
According to the type of the challenge, Wget will encode them
using either the "basic" (insecure) or the "digest" authentication
scheme.
Another way to specify username and password is in the URL itself.
Either method reveals your password to anyone who bothers to run
"ps". To prevent the passwords from being seen, store them in
.wgetrc or .netrc, and make sure to protect those files from other
users with "chmod". If the passwords are really important, do not
leave them lying in those files either---edit the files and delete
them after Wget has started the download.
For more information about security issues with Wget, see the GNU
Info entry for wget.
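For example (the username, password, and URL are placeholders; note
the "ps" caveat above):
wget --http-user=alice --http-passwd=secret https://example.com/protected/index.html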
--no-cache
Disable server-side cache. In this case, Wget will send the
remote server an appropriate directive (Pragma: no-cache) to get
the file from the remote service, rather than returning the cached
version. This is especially useful for retrieving and flushing
out-of-date documents on proxy servers.
Caching is allowed by default.
--no-cookies
Disable the use of cookies. Cookies are a mechanism for maintain-
ing server-side state. The server sends the client a cookie using
the "Set-Cookie" header, and the client responds with the same
cookie upon further requests. Since cookies allow the server own-
ers to keep track of visitors and for sites to exchange this
information, some consider them a breach of privacy. The default
is to use cookies; however, storing cookies is not on by default.
--load-cookies file
Load cookies from file before the first HTTP retrieval. file is a
textual file in the format originally used by Netscape's cook-
ies.txt file.
You will typically use this option when mirroring sites that
require that you be logged in to access some or all of their con-
tent. The login process typically works by the web server issuing
an HTTP cookie upon receiving and verifying your credentials. The
cookie is then resent by the browser when accessing that part of
the site, and so proves your identity.
Mirroring such a site requires Wget to send the same cookies your
browser sends when communicating with the site. This is achieved
by --load-cookies---simply point Wget to the location of the cook-
ies.txt file, and it will send the same cookies your browser would
send in the same situation. Different browsers keep textual
cookie files in different locations:
Netscape 4.x.
The cookies are in ~/.netscape/cookies.txt.
Mozilla and Netscape 6.x.
Mozilla's cookie file is also named cookies.txt, located some-
where under ~/.mozilla, in the directory of your profile. The
full path usually ends up looking somewhat like
~/.mozilla/default/some-weird-string/cookies.txt.
Internet Explorer.
You can produce a cookie file Wget can use by using the File
menu, Import and Export, Export Cookies. This has been tested
with Internet Explorer 5; it is not guaranteed to work with
earlier versions.
Other browsers.
If you are using a different browser to create your cookies,
--load-cookies will only work if you can locate or produce a
cookie file in the Netscape format that Wget expects.
If you cannot use --load-cookies, there might still be an alterna-
tive. If your browser supports a ''cookie manager'', you can use
it to view the cookies used when accessing the site you're mirror-
ing. Write down the name and value of the cookie, and manually
instruct Wget to send those cookies, bypassing the ''official''
cookie support:
wget --cookies=off --header "Cookie: <name>=<value>"
--save-cookies file
Save cookies to file before exiting. This will not save cookies
that have expired or that have no expiry time (so-called ''session
cookies''), but also see --keep-session-cookies.
--keep-session-cookies
When specified, causes --save-cookies to also save session cook-
ies. Session cookies are normally not saved because they are sup-
posed to be forgotten when you exit the browser. Saving them is
useful on sites that require you to log in or to visit the home
page before you can access some pages. With this option, multiple
Wget runs are considered a single browser session as far as the
site is concerned.
Since the cookie file format does not normally carry session cook-
ies, Wget marks them with an expiry timestamp of 0. Wget's
--load-cookies recognizes those as session cookies, but it might
confuse other browsers. Also note that cookies so loaded will be
treated as other session cookies, which means that if you want
--save-cookies to preserve them again, you must use --keep-ses-
sion-cookies again.
--ignore-length
Unfortunately, some HTTP servers (CGI programs, to be more pre-
cise) send out bogus "Content-Length" headers, which makes Wget go
wild, as it thinks not all the document was retrieved. You can
spot this syndrome if Wget retries getting the same document again
and again, each time claiming that the (otherwise normal) connec-
tion has closed on the very same byte.
With this option, Wget will ignore the "Content-Length"
header---as if it never existed.
--header=additional-header
Define an additional-header to be passed to the HTTP servers.
Headers must contain a : preceded by one or more non-blank charac-
ters, and must not contain newlines.
You may define more than one additional header by specifying
--header more than once.
wget --header='Accept-Charset: iso-8859-2' \
--header='Accept-Language: hr' \
https://fly.srk.fer.hr/
Specification of an empty string as the header value will clear
all previous user-defined headers.
--proxy-user=user
--proxy-passwd=password
Specify the username user and password password for authentication
on a proxy server. Wget will encode them using the "basic"
authentication scheme.
Security considerations similar to those with --http-passwd per-
tain here as well.
--referer=url
Include 'Referer: url' header in HTTP request. Useful for
retrieving documents with server-side processing that assume they
are always being retrieved by interactive web browsers and only
come out properly when Referer is set to one of the pages that
point to them.
--save-headers
Save the headers sent by the HTTP server to the file, preceding
the actual contents, with an empty line as the separator.
-U agent-string
--user-agent=agent-string
Identify as agent-string to the HTTP server.
The HTTP protocol allows the clients to identify themselves using
a "User-Agent" header field. This enables distinguishing the WWW
software, usually for statistical purposes or for tracing of pro-
tocol violations. Wget normally identifies as Wget/version, ver-
sion being the current version number of Wget.
However, some sites have been known to impose the policy of tai-
loring the output according to the "User-Agent"-supplied informa-
tion. While conceptually this is not such a bad idea, it has been
abused by servers denying information to clients other than
"Mozilla" or Microsoft "Internet Explorer". This option allows
you to change the "User-Agent" line issued by Wget. Use of this
option is discouraged, unless you really know what you are doing.
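For example, to identify with a custom string (everything in the
example is a placeholder):
wget -U 'mymirror/1.0 (admin@example.com)' https://example.com/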
--post-data=string
--post-file=file
Use POST as the method for all HTTP requests and send the speci-
fied data in the request body. "--post-data" sends string as
data, whereas "--post-file" sends the contents of file. Other
than that, they work in exactly the same way.
Please be aware that Wget needs to know the size of the POST data
in advance. Therefore the argument to "--post-file" must be a
regular file; specifying a FIFO or something like /dev/stdin won't
work. It's not quite clear how to work around this limitation
inherent in HTTP/1.0. Although HTTP/1.1 introduces chunked trans-
fer that doesn't require knowing the request length in advance, a
client can't use chunked unless it knows it's talking to an
HTTP/1.1 server. And it can't know that until it receives a
response, which in turn requires the request to have been com-
pleted -- a chicken-and-egg problem.
Note: if Wget is redirected after the POST request is completed,
it will not send the POST data to the redirected URL. This is
because URLs that process POST often respond with a redirection to
a regular page (although that's technically disallowed), which
does not desire or accept POST. It is not yet clear that this
behavior is optimal; if it doesn't work out, it will be changed.
This example shows how to log to a server using POST and then pro-
ceed to download the desired pages, presumably only accessible to
authorized users:
# Log in to the server. This can be done only once.
wget --save-cookies cookies.txt \
--post-data 'user=foo&password=bar' \
https://server.com/auth.php
# Now grab the page or pages we care about.
wget --load-cookies cookies.txt \
-p https://server.com/interesting/article.php
FTP Options
--no-remove-listing
Don't remove the temporary .listing files generated by FTP
retrievals. Normally, these files contain the raw directory list-
ings received from FTP servers. Not removing them can be useful
for debugging purposes, or when you want to be able to easily
check on the contents of remote server directories (e.g. to verify
that a mirror you're running is complete).
Note that even though Wget writes to a known filename for this
file, this is not a security hole in the scenario of a user making
.listing a symbolic link to /etc/passwd or something and asking
"root" to run Wget in his or her directory. Depending on the
options used, either Wget will refuse to write to .listing, making
the globbing/recursion/time-stamping operation fail, or the sym-
bolic link will be deleted and replaced with the actual .listing
file, or the listing will be written to a .listing.number file.
Even though this situation isn't a problem, though, "root" should
never run Wget in a non-trusted user's directory. A user could do
something as simple as linking index.html to /etc/passwd and ask-
ing "root" to run Wget with -N or -r so the file will be overwrit-
ten.
--no-glob
Turn off FTP globbing. Globbing refers to the use of shell-like
special characters (wildcards), like *, ?, [ and ] to retrieve
more than one file from the same directory at once, like:
wget ftp://gnjilux.srk.fer.hr/*.msg
By default, globbing will be turned on if the URL contains a glob-
bing character. This option may be used to turn globbing on or
off permanently.
You may have to quote the URL to protect it from being expanded by
your shell. Globbing makes Wget look for a directory listing,
which is system-specific. This is why it currently works only
with Unix FTP servers (and the ones emulating Unix "ls" output).
--passive-ftp
Use the passive FTP retrieval scheme, in which the client initi-
ates the data connection. This is sometimes required for FTP to
work behind firewalls.
--retr-symlinks
Usually, when retrieving FTP directories recursively and a sym-
bolic link is encountered, the linked-to file is not downloaded.
Instead, a matching symbolic link is created on the local filesys-
tem. The pointed-to file will not be downloaded unless this
recursive retrieval would have encountered it separately and down-
loaded it anyway.
When --retr-symlinks is specified, however, symbolic links are
traversed and the pointed-to files are retrieved. At this time,
this option does not cause Wget to traverse symlinks to directo-
ries and recurse through them, but in the future it should be
enhanced to do this.
Note that when retrieving a file (not a directory) because it was
specified on the command-line, rather than because it was recursed
to, this option has no effect. Symbolic links are always tra-
versed in this case.
--no-http-keep-alive
Turn off the ''keep-alive'' feature for HTTP downloads. Normally,
Wget asks the server to keep the connection open so that, when you
download more than one document from the same server, they get
transferred over the same TCP connection. This saves time and at
the same time reduces the load on the server.
This option is useful when, for some reason, persistent
(keep-alive) connections don't work for you, for example due to a
server bug or due to the inability of server-side scripts to cope
with the connections.
Recursive Retrieval Options
-r
--recursive
Turn on recursive retrieving.
-l depth
--level=depth
Specify recursion maximum depth level depth. The default maximum
depth is 5.
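For example, to recurse at most three levels deep from the starting
page (the URL is a placeholder):
wget -r -l 3 https://example.com/docs/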
--delete-after
This option tells Wget to delete every single file it downloads,
after having done so. It is useful for pre-fetching popular pages
through a proxy, e.g.:
wget -r -nd --delete-after https://whatever.com/~popular/page/
The -r option is to retrieve recursively, and -nd to not create
directories.
Note that --delete-after deletes files on the local machine. It
does not issue the DELE command to remote FTP sites, for instance.
Also note that when --delete-after is specified, --convert-links
is ignored, so .orig files are simply not created in the first
place.
-k
--convert-links
After the download is complete, convert the links in the document
to make them suitable for local viewing. This affects not only
the visible hyperlinks, but any part of the document that links to
external content, such as embedded images, links to style sheets,
hyperlinks to non-HTML content, etc.
Each link will be changed in one of the two ways:
* The links to files that have been downloaded by Wget will be
changed to refer to the file they point to as a relative link.
Example: if the downloaded file /foo/doc.html links to
/bar/img.gif, also downloaded, then the link in doc.html will
be modified to point to ../bar/img.gif. This kind of trans-
formation works reliably for arbitrary combinations of direc-
tories.
* The links to files that have not been downloaded by Wget will
be changed to include host name and absolute path of the loca-
tion they point to.
Example: if the downloaded file /foo/doc.html links to
/bar/img.gif (or to ../bar/img.gif), then the link in doc.html
will be modified to point to https://hostname/bar/img.gif.
Because of this, local browsing works reliably: if a linked file
was downloaded, the link will refer to its local name; if it was
not downloaded, the link will refer to its full Internet address
rather than presenting a broken link. The fact that the former
links are converted to relative links ensures that you can move
the downloaded hierarchy to another directory.
Note that only at the end of the download can Wget know which
links have been downloaded. Because of that, the work done by -k
will be performed at the end of all the downloads.
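For example, a minimal recipe for an offline-browsable copy of a
directory tree (the URL is a placeholder):
wget -r -k https://example.com/manual/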
-K
--backup-converted
When converting a file, back up the original version with a .orig
suffix. Affects the behavior of -N.
-m
--mirror
Turn on options suitable for mirroring. This option turns on
recursion and time-stamping, sets infinite recursion depth and
keeps FTP directory listings. It is currently equivalent to -r -N
-l inf --no-remove-listing.
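In other words, the following two commands are equivalent (the host
is a placeholder):
wget -m https://example.com/
wget -r -N -l inf --no-remove-listing https://example.com/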
-p
--page-requisites
This option causes Wget to download all the files that are neces-
sary to properly display a given HTML page. This includes such
things as inlined images, sounds, and referenced stylesheets.
Ordinarily, when downloading a single HTML page, any requisite
documents that may be needed to display it properly are not down-
loaded. Using -r together with -l can help, but since Wget does
not ordinarily distinguish between external and inlined documents,
one is generally left with ''leaf documents'' that are missing
their requisites.
For instance, say document 1.html contains an "<IMG>" tag refer-
encing 1.gif and an "<A>" tag pointing to external document
2.html. Say that 2.html is similar but that its image is 2.gif
and it links to 3.html. Say this continues up to some arbitrarily
high number.
If one executes the command:
wget -r -l 2 https://<site>/1.html
then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be downloaded.
As you can see, 3.html is without its requisite 3.gif because Wget
is simply counting the number of hops (up to 2) away from 1.html
in order to determine where to stop the recursion. However, with
this command:
wget -r -l 2 -p https://<site>/1.html
all the above files and 3.html's requisite 3.gif will be down-
loaded. Similarly,
wget -r -l 1 -p https://<site>/1.html
will cause 1.html, 1.gif, 2.html, and 2.gif to be downloaded. One
might think that:
wget -r -l 0 -p https://<site>/1.html
would download just 1.html and 1.gif, but unfortunately this is
not the case, because -l 0 is equivalent to -l inf---that is,
infinite recursion. To download a single HTML page (or a handful
of them, all specified on the command-line or in a -i URL input
file) and its (or their) requisites, simply leave off -r and -l:
wget -p https://<site>/1.html
Note that Wget will behave as if -r had been specified, but only
that single page and its requisites will be downloaded. Links
from that page to external documents will not be followed. Actu-
ally, to download a single page and all its requisites (even if
they exist on separate websites), and make sure the lot displays
properly locally, this author likes to use a few options in addi-
tion to -p:
wget -E -H -k -K -p https://<site>/<document>
To finish off this topic, it's worth knowing that Wget's idea of
an external document link is any URL specified in an "<A>" tag, an
"<AREA>" tag, or a "<LINK>" tag other than "<LINK REL="stylesheet">".
--strict-comments
Turn on strict parsing of HTML comments. The default is to termi-
nate comments at the first occurrence of -->.
According to specifications, HTML comments are expressed as SGML
declarations. Declaration is special markup that begins with <!
and ends with >, such as <!DOCTYPE ...>, that may contain comments
between a pair of -- delimiters. HTML comments are ''empty decla-
rations'', SGML declarations without any non-comment text. There-
fore, <!--foo--> is a valid comment, and so is <!--one-- --two-->,
but <!--1--2--> is not.
On the other hand, most HTML writers don't perceive comments as
anything other than text delimited with <!-- and -->, which is not
quite the same. For example, something like <!------------ hey
----------> works as a valid comment as long as the number of
dashes is a multiple of four (!). If not, the comment technically
lasts until the next --, which may be at the other end of the
document. Because of this, many popular browsers completely
ignore the specification and implement what users have come to
expect: comments delimited with <!-- and -->.
Until version 1.9, Wget interpreted comments strictly, which
resulted in missing links in many web pages that displayed fine in
browsers, but had the misfortune of containing non-compliant com-
ments. Beginning with version 1.9, Wget has joined the ranks of
clients that implement ''naive'' comments, terminating each com-
ment at the first occurrence of -->.
If, for whatever reason, you want strict comment parsing, use this
option to turn it on.
Recursive Accept/Reject Options
-A acclist --accept acclist
-R rejlist --reject rejlist
Specify comma-separated lists of file name suffixes or patterns to
accept or reject.
-D domain-list
--domains=domain-list
Set domains to be followed. domain-list is a comma-separated list
of domains. Note that it does not turn on -H.
--exclude-domains domain-list
Specify the domains that are not to be followed.
--follow-ftp
Follow FTP links from HTML documents. Without this option, Wget
will ignore all the FTP links.
--follow-tags=list
Wget has an internal table of HTML tag / attribute pairs that it
considers when looking for linked documents during a recursive
retrieval. If a user wants only a subset of those tags to be con-
sidered, however, he or she should specify such tags in a
comma-separated list with this option.
--ignore-tags=list
This is the opposite of the --follow-tags option. To skip certain
HTML tags when recursively looking for documents to download,
specify them in a comma-separated list.
In the past, this option was the best bet for downloading a single
page and its requisites, using a command-line like:
wget --ignore-tags=a,area -H -k -K -r https://<site>/<document>
However, the author of this option came across a page with tags
like "<LINK REL="home" HREF="/">" and came to the realization that
specifying tags to ignore was not enough. One can't just tell
Wget to ignore "<LINK>", because then stylesheets will not be
downloaded. Now the best bet for downloading a single page and
its requisites is the dedicated --page-requisites option.
-H
--span-hosts
Enable spanning across hosts when doing recursive retrieving.
-L
--relative
Follow relative links only. Useful for retrieving a specific home
page without any distractions, not even those from the same hosts.
-I list
--include-directories=list
Specify a comma-separated list of directories you wish to follow
when downloading. Elements of list may contain wildcards.
-X list
--exclude-directories=list
Specify a comma-separated list of directories you wish to exclude
from download. Elements of list may contain wildcards.
-np
--no-parent
Do not ever ascend to the parent directory when retrieving recur-
sively. This is a useful option, since it guarantees that only
the files below a certain hierarchy will be downloaded.
FILES
/usr/local/etc/wgetrc
Default location of the global startup file.
.wgetrc
User startup file.
BUGS
You are welcome to send bug reports about GNU Wget to
<bug-wget@gnu.org>.
Before actually submitting a bug report, please try to follow a few
simple guidelines.
1. Please try to ascertain that the behavior you see really is a bug.
If Wget crashes, it's a bug. If Wget does not behave as docu-
mented, it's a bug. If things work strange, but you are not sure
about the way they are supposed to work, it might well be a bug.
2. Try to repeat the bug in as simple circumstances as possible.
E.g. if Wget crashes while downloading wget -rl0 -kKE -t5 -Y0
https://yoyodyne.com -o /tmp/log, you should try to see if the
crash is repeatable, and if it will occur with a simpler set of
options. You might even try to start the download at the page
where the crash occurred to see if that page somehow triggered the
crash.
Also, while I will probably be interested to know the contents of
your .wgetrc file, just dumping it into the debug message is prob-
ably a bad idea. Instead, you should first try to see if the bug
repeats with .wgetrc moved out of the way. Only if it turns out
that .wgetrc settings affect the bug, mail me the relevant parts
of the file.
3. Please start Wget with -d option and send the log (or the relevant
parts of it). If Wget was compiled without debug support, recom-
pile it. It is much easier to trace bugs with debug support on.
4. If Wget has crashed, try to run it in a debugger, e.g. "gdb `which
wget` core" and type "where" to get the backtrace.
SEE ALSO
GNU Info entry for wget.
AUTHOR
Originally written by Hrvoje Niksic <hniksic@xemacs.org>.
COPYRIGHT
Copyright (c) 1996, 1997, 1998, 2000, 2001, 2002, 2003 Free Software
Foundation, Inc.
Permission is granted to make and distribute verbatim copies of this
manual provided the copyright notice and this permission notice are
preserved on all copies.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.1 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ''GNU General Public License'' and ''GNU Free
Documentation License'', with no Front-Cover Texts, and with no Back-
Cover Texts. A copy of the license is included in the section enti-
tled ''GNU Free Documentation License''.
GNU Wget 1.9+cvs-dev 2004-08-27 WGET(1)