A Maker's mentality - Solving my problems
A rewrite of an old LinkedIn / Twitter post
As a machine learning person, I have always dreamt of using ML and optimization to build tools that improve my life and the lives of others.
However, ML models are useless artifacts on their own. Data analysis is often more immediately useful, yielding insights and sometimes actionable items (e.g., my French Journey series). Two obstacles kept getting in the way:
- Absence of data: much of what I want to optimize about my life requires building data-collection tools and protocols, and of course consistent data entry.
- Serving: ML models and data-analysis insights are only useful when wrapped in the right software (and sometimes hardware) design that brings them to life and puts them to a purpose.
~5 years ago, I didn't have many skills in tooling or in building non-scientific software.
But this has changed dramatically, thanks to advances in the tooling ecosystem.
- Front-end:
- I started with Streamlit, which made it possible to build interfaces that serve the model's value to users and let them interact with it. Streamlit is great for simple interfaces, but once pushed too hard, you start to feel its limits.
- Last year, I added Jinja2, HTMX, and TailwindCSS to the mix. This was a huge upgrade over Streamlit alone: easier to customize and control. They do add substantially more complexity, but it has stayed manageable so far.
- Hopefully sometime this year I will build up more JavaScript capability as well.
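As a minimal sketch of why the Jinja2 + HTMX combination feels like an upgrade (the endpoint name, markup, and variable here are made up for illustration): the server renders an HTML fragment, and an `hx-get` attribute is all it takes for HTMX to swap it in without a full page reload or any hand-written JavaScript.

```python
from jinja2 import Template

# Hypothetical fragment: a button that asks the server to refresh the
# #stats div in place. HTMX reads the hx-get / hx-target attributes.
tpl = Template(
    '<button hx-get="/refresh" hx-target="#stats">Refresh</button>\n'
    '<div id="stats">{{ count }} items</div>'
)

# Server-side rendering: plain Python values flow into the template.
html = tpl.render(count=42)
print(html)
```

The nice property is that all the logic stays server-side in Python; the browser only ever receives ready-made HTML fragments.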
- DB and its management:
- I used MongoDB for a while, mainly because of my familiarity with the JSON format, and because of the convenient managed hosting of MongoDB Atlas.
- However, some painful experiences with MongoDB during the HN analysis and other personal projects (more on that in a later post) shifted me towards PostgreSQL.
- Now, with Supabase, I get PostgreSQL, authentication management, and storage in a few clicks. This is much easier, sufficient in most cases, and cheaper than using AWS or GCP directly.
- And when I am running a data-heavy application, self-hosting the DB is the way to go.
- Backend: Python + Flask
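A minimal sketch of the kind of Flask backend this stack needs (the route and payload are placeholders, not from an actual project): a tiny app serving JSON, which the HTMX front-end or a monitoring probe can hit.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # A trivial endpoint: useful as a liveness check once the app
    # runs inside a container behind a tunnel.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

The same app object would also serve the Jinja2-rendered fragments; Flask's built-in test client makes endpoints like this easy to verify without running a server.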
- Hosting and serving
- I host mainly on my Pi 4 (4 GB). If necessary, a cloud-based VM will do.
- Docker containers, orchestrated with Docker Compose
- Tunneling using Cloudflare Tunnel (and sometimes ngrok), which makes it easy to expose my work.
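Putting the hosting pieces together, a compose file of roughly this shape works on a Pi: one container for the app and a `cloudflared` sidecar that exposes it. This is a hypothetical sketch: the service names, port, and the `TUNNEL_TOKEN` variable are placeholders, not a real configuration.

```yaml
# Hypothetical docker-compose.yml: app container + Cloudflare Tunnel sidecar.
services:
  app:
    build: .            # the Flask app's Dockerfile
    ports:
      - "8000:8000"
    restart: unless-stopped
  tunnel:
    image: cloudflare/cloudflared:latest
    command: tunnel run --token ${TUNNEL_TOKEN}  # token from the Cloudflare dashboard
    depends_on:
      - app
    restart: unless-stopped
```

With `restart: unless-stopped`, both containers come back on their own after a power cut, which matters when the "server" is a Pi on a shelf.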
I need the capability to solve my own problems and the problems of those around me. Gaining experience with these tools unlocks huge potential. Now I can finally go into maker mode and tinker with ideas. The motto of the day is:
See a need, fill a need 😊