## Origin of the Requirement
I really enjoy reading blogs, but there are always interesting ones out there waiting to be discovered, so I started following some channels that recommend blogs. At the same time, I wanted to share some of my own favorites, but manually copying them every time was too cumbersome. So I decided to automate the process, and learn GitHub Actions along the way.
## Journey
- Most of the time was spent figuring out how to obtain the OPML file. The documentation provided by Tiny Tiny RSS is quite terse, and most online resources stop at deployment instructions, so I had to work it out on my own.
- The web version of Tiny Tiny RSS provides a button to export OPML, which points to `http://example.com/tt-rss/backend.php?op=opml&method=export`. However, it requires authentication: you need to be logged in.
- The examples provided a login API call, so my initial idea was to follow that and simply add the export request as a `data` parameter. But no matter how I tried to add it, I couldn't get it to work.
- Later, I noticed that a successful login returns a session value, so I tested it with curl:

  ```bash
  # Login and get the session ID
  SESSION=$(curl -s -d '{"op":"login","user":"user","password":"password"}' http://example.com/tt-rss/api/ \
    | python -c "import sys, json; print(json.load(sys.stdin)['content']['session_id'])")

  # Get the OPML file, authenticating with the session cookie
  curl -o my_tiny_tiny_rss.opml 'http://example.com/tt-rss/backend.php?op=opml&method=export' \
    --cookie "ttrss_sid=${SESSION}"
  ```
- I then rewrote it in Python using requests (see the first sketch after this list). Looking back, this is a pretty basic operation, and I had encountered sessions before; if I had remembered that earlier, I could have saved some time.
- There are ready-made libraries for parsing OPML, so I used one (see the second sketch after this list).
- Then I extracted the personal information into a configuration file, and ran into a pitfall here. Initially, I wrote `data = f"{{'op': 'login', 'user': {user}, 'password': {password}}}"` when it should have been `data = {'op': 'login', 'user': user, 'password': password}`. Although the two look the same in form, the former is just a string, while the latter is a dict that serializes to a proper JSON object. This reminded me that even though Python is dynamically typed, I still have to watch out for type errors.
- Finally, I set up GitHub Actions. I had used it before, but only with pre-written workflows, so this also took some time to learn. The problems I encountered were:
  - Formatting issues in the YAML file. These can be checked with YAML Validator; I think VS Code has a corresponding extension as well.
  - The variables needed at runtime are stored as secrets. I used to think a secret's value could only be a plain string, but "Getting Github repository secrets in Python" mentions that you can put an entire YAML file in the value, so I figured a JSON file should work too. I tried it, and it did, which meant very few changes to my code (see the third sketch after this list).
  - Triggering the workflow by hand requires adding `workflow_dispatch:` under the workflow's `on:` section.
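Roughly, the requests version looks like this. This is a minimal sketch mirroring the curl commands above; the URL, username, and password are placeholders:

```python
import requests

API_URL = "http://example.com/tt-rss/api/"
EXPORT_URL = "http://example.com/tt-rss/backend.php?op=opml&method=export"

# Log in and grab the session ID from the JSON response
resp = requests.post(API_URL, json={"op": "login", "user": "user", "password": "password"})
session_id = resp.json()["content"]["session_id"]

# Export the OPML, authenticating with the ttrss_sid cookie
opml = requests.get(EXPORT_URL, cookies={"ttrss_sid": session_id})
with open("my_tiny_tiny_rss.opml", "wb") as f:
    f.write(opml.content)
```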
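For the parsing step, here is a minimal sketch using listparser, one such OPML library (I won't name the exact library used; the file name is a placeholder and field names may differ between libraries):

```python
import listparser

# OPML is XML, so the parser just needs the file contents as a string
with open("my_tiny_tiny_rss.opml", encoding="utf-8") as f:
    result = listparser.parse(f.read())

# Each entry carries the title and feed URL from the OPML outline
for feed in result.feeds:
    print(feed.title, feed.url)
```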
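As a sketch of the secrets trick, assume the whole config lives in a single secret (hypothetically named `TTRSS_CONFIG`) whose value is a complete JSON document, and that the workflow exposes it to the job as an environment variable via `TTRSS_CONFIG: ${{ secrets.TTRSS_CONFIG }}` under `env:`. The Python side then only needs:

```python
import json
import os

# The secret's value is a whole JSON document, injected by the workflow,
# so the existing config-loading code barely changes
config = json.loads(os.environ["TTRSS_CONFIG"])
user = config["user"]
password = config["password"]
```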
## Knowledge Gained
- Combining shell pipes with Python. The line below was written by ChatGPT; it's amazing:

  ```bash
  SESSION=$(curl -s -d '{"op":"login","user":"user","password":"password"}' http://example.com/tt-rss/api/ | python -c "import sys, json; print(json.load(sys.stdin)['content']['session_id'])")
  ```
- Usage of GitHub Actions
- The Python requests library
## Conclusion
This was a very small project, but it still took me half a day, even with ChatGPT's help. I once came across the claim that search engines greatly lowered the barrier for ordinary people to acquire knowledge, and that ChatGPT has objectively lowered it even further. Based on my experience with this project, I strongly agree. Given the background information I provided and the questions I asked, ChatGPT saved me the time I would otherwise have spent on scattered tutorials and incomplete documentation, and the interaction feels more natural than anything a search engine can offer.