-rw-r--r--  README.md |  8
-rw-r--r--  bot.py    | 26
2 files changed, 24 insertions, 10 deletions
diff --git a/README.md b/README.md
index f10a28f..f7f5725 100644
--- a/README.md
+++ b/README.md
@@ -2,15 +2,17 @@
Fediverse ebooks bot using neural networks
+
## Usage
-First, install Python dependencies with `pip install psycopg2 torch transformers datasets Mastodon.py`. Currently only Mastodon with PostgreSQL is supported, but it should be easy to add support for other platforms and databases.
+First, install Python dependencies using your distro's package manager or `pip`: [psycopg2](https://www.psycopg.org), [torch](https://pytorch.org/), [transformers](https://huggingface.co/docs/transformers/index), and [datasets](https://huggingface.co/docs/datasets/). Additionally, for Mastodon and Pleroma, install [Mastodon.py](https://mastodonpy.readthedocs.io/en/stable/), and for Misskey, install [Misskey.py](https://misskeypy.readthedocs.io/ja/latest/). If your database or platform isn't supported, don't worry! It's easy to add support for other platforms and databases, and contributions are welcome!
-Generate the training data from the Mastodon database using `python data.py -d 'dbname=test user=postgres password=secret'`. You can skip this step if you have training data from another source.
+Now generate the training data from your fediverse server's database using `python data.py -d 'dbname=test user=postgres password=secret host=localhost port=5432'`. You can skip this step if you have collected training data from another source.
Next, train the network with `python train.py`, which may take several hours. It's a lot faster when using a GPU. If you need advanced features when training, you can also train using [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py).
-Finally, create a Mastodon application for your bot account and run the bot with `python bot.py -i mastodon.instance -t access_token -n 'Initial text'`. You can create a cron job to run this periodically. Enjoy!
+Finally, create an application for your bot account and generate an access token. Run the bot with `python bot.py -b server_type -i fediverse.instance -t access_token`. You can omit `-b server_type` for Mastodon and Pleroma. To run the bot periodically, create a cron job. Enjoy!
+
## Resources
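For reference, the `-d` argument shown above is a standard libpq/psycopg2 connection string. Below is a minimal sketch of how such a DSN is typically consumed; the `statuses` table and `text` column are assumptions for illustration, since the actual query run by `data.py` is not shown in this diff.

```python
# Minimal sketch: connect with the same DSN string that data.py receives
# and read some post text. Table and column names here are assumptions;
# the real schema depends on your fediverse server.
import psycopg2

dsn = 'dbname=test user=postgres password=secret host=localhost port=5432'

with psycopg2.connect(dsn) as conn:
    with conn.cursor() as cur:
        cur.execute('SELECT text FROM statuses LIMIT 10')  # hypothetical table
        for (content,) in cur.fetchall():
            print(content)
```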
diff --git a/bot.py b/bot.py
index 76f2952..a9d661f 100644
--- a/bot.py
+++ b/bot.py
@@ -1,11 +1,12 @@
from argparse import ArgumentParser
from random import randint, choice
-from mastodon import Mastodon
from transformers import AutoTokenizer, AutoModelForCausalLM
parser = ArgumentParser()
+parser.add_argument('-b', '--backend', choices=['mastodon', 'misskey'], default='mastodon',
+ help='fediverse server type')
parser.add_argument('-i', '--instance', help='Mastodon instance hosting the bot')
parser.add_argument('-t', '--token', help='Mastodon application access token')
parser.add_argument('-n', '--input', help='initial input text')
@@ -76,12 +77,23 @@ output = tokenizer.decode(model.generate(
print(output)
-# Post it to Mastodon
-mastodon = Mastodon(
- access_token=args.token,
- api_base_url=args.instance
-)
+# Prepare the post
post = output.split('\n')[0]
if len(post) < 200:
    post = output.split('\n')[0] + '\n' + output.split('\n')[1]
-mastodon.status_post(post[:500])
+
+
+# Post it!
+if args.backend == 'mastodon':
+ from mastodon import Mastodon
+
+ mastodon = Mastodon(
+ access_token=args.token,
+ api_base_url=args.instance
+ )
+ mastodon.status_post(post[:500])
+elif args.backend == 'misskey':
+ from Misskey import Misskey
+
+ misskey = Misskey(args.instance, i=args.token)
+ misskey.notes_create(post[:500])
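For reference, the new `-b`/`--backend` flag selects the posting code through the if/elif chain added above. The sketch below expresses the same dispatch as a lookup table of posting helpers, using only the Mastodon.py and Misskey.py calls that appear in this diff; it is an alternative arrangement for illustration, not what bot.py actually does.

```python
# Sketch: the -b/--backend dispatch as a lookup table of posting helpers.
# The API calls mirror the ones this commit adds: status_post() for
# Mastodon/Pleroma, notes_create() for Misskey.
def post_mastodon(instance, token, text):
    from mastodon import Mastodon
    Mastodon(access_token=token, api_base_url=instance).status_post(text)

def post_misskey(instance, token, text):
    from Misskey import Misskey  # import name as used in bot.py
    Misskey(instance, i=token).notes_create(text)

POSTERS = {'mastodon': post_mastodon, 'misskey': post_misskey}

# Hypothetical usage: POSTERS[args.backend](args.instance, args.token, post[:500])
```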