 README.md | 18 +----------------
 1 file changed, 1 insertion(+), 17 deletions(-)
diff --git a/README.md b/README.md
index fc7ab6e..20781e3 100644
--- a/README.md
+++ b/README.md
@@ -25,24 +25,8 @@ First, clone the submodule with `git submodule init && git submodule update --re
`mkbinder.py` is a Python script to download web pages. It can detect if the latest version of a website is newer than the downloaded version and download only the web pages that need to be updated.
-Navigate to the folder in your terminal and run `pip install -r requirements.txt` to install dependencies.
-
-Add links to download to text files in the `Links` folder. The links in each text file will be downloaded to their own folder.
+First, install dependencies with `pip install -r requirements.txt`. Add links to download to text files in the `Links` folder. The links in each text file will be downloaded to their own folder.
Now run `python mkbinder.py`. It may take a while, depending on how many links to download.
-Spam Ctrl-C in your terminal to quit the script while it's running.
-
You can change the backend used to download the websites, but the default backend `pdfkit` tends to work the best and is recommended. There is an alternative backend `weasyprint` which may work better in some cases and does not require wkhtmltopdf. Only change the backend if you are having trouble with the default and you know what you are doing!
-
-```
-usage: mkbinder.py [-h] [--backend {pdfkit,weasyprint}] [--force {False,True}]
-
-optional arguments:
- -h, --help show this help message and exit
- --backend {pdfkit,weasyprint}, -b {pdfkit,weasyprint}
- change the download backend; default: pdfkit
- --force {False,True}, -f {False,True}
- force download all links instead of only the ones that need to be updated; default: False
-```
-
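Taken together, the updated README describes a short command-line workflow. A minimal sketch of it is below, assuming the repository layout the README implies (`requirements.txt`, `mkbinder.py`, and a `Links/` folder of text files, one URL per line); the example URL and file name are illustrative, not from the source:

```shell
# Install Python dependencies (assumes requirements.txt in the repo root).
pip install -r requirements.txt

# Add links to download: each text file in Links/ becomes its own output folder.
mkdir -p Links
echo "https://example.com" > Links/example.txt   # hypothetical example link

# Run with the default backend (pdfkit, recommended).
python mkbinder.py

# Alternative backend that does not require wkhtmltopdf.
python mkbinder.py --backend weasyprint

# Force re-download of all links instead of only outdated ones.
python mkbinder.py --force True
```

Only the downloads that are newer upstream are fetched on a plain run; `--force True` overrides that check, per the usage text removed in this commit.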