I Built the Dumbest AI Startup I Could in 90 Minutes.


Yesterday I participated in a 90-minute vibe-marketing sprint competition. The challenge was to see what you could come up with in 90 minutes as a marketing campaign for an existing or a made-up product.

Before I began, I asked the organizers—twice—whether I should build something useful or just something cool. Someone told me, “Cool is useful, in marketing.” I didn’t really agree with him then. I still don’t. But I went for it anyway.

The gang

Earlier that week, I had trained a small but cool image classifier to determine whether something was a bird… or not. It was my first time training an ML model, so I was very happy with it. I was proud.

The brand I decided to build was centered on the bird/not-bird classifier: a company that tries its best, comes up with not-very-helpful models, and is still very proud of the work it does. I don’t know, maybe it’s a reflection of how I feel here at NS. I am far behind most people here (at least technically), but I make things anyway and I think I’m improving (:

Yes, the universe is satire, and yes, it takes itself too seriously on purpose, but I think I also wanted to genuinely celebrate an entity just trying its best.


Courage, not accuracy

Anyway, enough moping about. Over the 90 minutes I had, I built a homepage and a brand identity, a fake TED talk and a TED page, and a merch store (complete with lore!).

But I think the thing I was most proud of was the Twitter feed.

For me, it was where the whole world of the project came to life. I built out a cast of fictional users: confused journalists, loyal customers, and rogue employees. People tweeting about how Bird/Not Bird misclassified their ex. News agencies reporting on our latest “breakthrough” with deeply concerning enthusiasm. Team members sharing proud screenshots of our tool misidentifying a cloud as “possibly bird.”

I couldn’t show most of it during the final presentation—I only got two minutes, and there’s no time in that to showcase a fake Slack meltdown about whether a hotdog counts as “bird-adjacent.” But it’s there, and it’s part of the experience.

Also: I recorded the entire process. All 90 minutes of it. From the moment I opened a browser to the last fake customer tweet. If you’re curious about how something like this gets made from nothing, I’m happy to upload it somewhere.

Karol, one of my favorite people at NS, called it a very high-effort shitpost.

Fair.

(But fun).

Three Days, Two Models, and a Thousand Questions: My FastAI ML Journey

What I Learned While Building Image Classifiers From Scratch


Reading about four words into this post will tell you that it was written by an AI. Yes, that is annoying. Yes, I chose convenience over quality. Yes, I feel bad. I threw in a few sentences out of my very own head in italics, so please don’t judge me too much.

Over the course of three intense days, I dove headfirst into the world of machine learning with FastAI. My goal? To build working image classifiers, learn by doing, and avoid passively watching tutorials. What followed was a whirlwind of wins, roadblocks, and a lot of late-night debugging. Here’s what happened, and what I learned.


Day 1: Chasing Birds Instead of Dogs

I began with the FastAI tutorial, which focuses on creating a dog breed classifier using the Oxford-IIIT Pets dataset. But on the first page, I stumbled upon a link to a Kaggle notebook that proposed a simpler challenge: building a bird vs. not-bird classifier.

Intrigued, I decided to deviate from the plan.

The Kaggle notebook used the DuckDuckGo API to fetch images, but I quickly discovered that the API no longer worked. After some research, I switched to the Bing Image Search API. This required setting up Azure credentials and writing a script to collect bird and non-bird images.

However, once I had the images, I ran into another problem: they weren’t formatted or stored correctly. FastAI expects a certain folder structure, so I used Bash to create directories like /bird and /notbird and moved images accordingly.

Using ResNet-34 for transfer learning, I trained my model across 4 epochs. The results were promising. I even added a neat feature: the ability to input an external image and display it with its predicted label and confidence score overlaid on top.

Cutting edge technology
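
If you’re curious what that whole Day 1 flow looks like in code, here’s a minimal sketch. It’s not my exact notebook (the folder path, image size, and test photo name are placeholders), but the fastai calls are the ones the workflow leans on:

```python
from fastai.vision.all import *

# Images sorted into images/bird and images/notbird -- the folder
# structure that fastai's from_folder loader expects.
path = Path('images')
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42, item_tfms=Resize(192)
)

# Transfer learning on a pretrained ResNet-34.
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(4)

# Classify an external image and report the label with its confidence.
img = PILImage.create('mystery_photo.jpg')
label, _, probs = learn.predict(img)
print(f'{label} ({probs.max():.1%} confident)')
```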


Day 2: Dog Breeds, Batch Sizes, and Bottlenecks

It was time to revisit the original tutorial: the dog breed classifier. But I wanted to do it from scratch. No pre-written notebooks. Just raw code and the Oxford dataset.

The original plan used ResNet-34, but training was painfully slow, even on Colab’s free GPU. I tried switching to ResNet-18 for speed, which helped, but my accuracy plateaued at around 93%.

To optimize:

I reduced the batch size to avoid frequent CUDA memory errors. (Look at only a few images at once, please)

I tried EfficientNet-B3, which promised better accuracy at similar speeds. (Stop looking. I’ll find someone else to look instead)

I increased image resolution to 300×300. (The images are now clearer)

I added data augmentation with aug_transforms(). (The images are now several different versions of the same image. “Hey look, a blur! Is that still the same image??”)

I trained for 5 epochs instead of 4. (Practice makes perfect?)
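
Put together, the tweaked setup looked roughly like this. Treat it as a sketch rather than my exact notebook: the dataset comes from fastai’s built-in Oxford-IIIT Pets download, and the batch size of 32 is just an example of the kind of value I ended up tuning towards.

```python
from fastai.vision.all import *

# Oxford-IIIT Pets: the breed is encoded in the file name,
# e.g. 'great_pyrenees_12.jpg'.
path = untar_data(URLs.PETS) / 'images'

pets = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
    item_tfms=Resize(460),
    # Augmentation, plus the bump to 300x300 resolution.
    batch_tfms=aug_transforms(size=300),
)

# A smaller batch size keeps Colab's free GPU from running out of memory.
dls = pets.dataloaders(path, bs=32)

learn = vision_learner(dls, resnet18, metrics=error_rate)
# Swapping in a bigger backbone (needs the timm package installed):
# learn = vision_learner(dls, 'efficientnet_b3', metrics=error_rate)
learn.fine_tune(5)
```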

Despite these tweaks, my validation loss remained stubbornly high. Some class labels in the dataset (like Great Pyrenees) seemed mislabeled, further complicating training.

Key lesson: model choice matters, but data quality and compute constraints play an equally large role.


Day 3: Frustrations, Frontends, and Future Plans

With two models under my belt, I wanted to push further.

First, I attempted to build a frontend for the bird-not-bird classifier using Replit (btw, if you want a frontend, use Lovable; if you want logic, use Cursor. Replit is like an ugly middle child that sucks both ways). But loading the trained .pkl model into the environment proved difficult. Replit didn’t play well with FastAI’s export structure, and the frontend never fully came together.
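
For what it’s worth, the inference side on its own is tiny. This sketch assumes the model was exported earlier with learn.export() to a file called bird_not_bird.pkl (the file names are placeholders); the painful part on Replit was the environment around this code, not the code itself.

```python
from fastai.vision.all import *

# Assumes the trained learner was saved earlier with
# learn.export('bird_not_bird.pkl'); the file name is a placeholder.
learn = load_learner('bird_not_bird.pkl')

def classify(image_path):
    """Return the predicted label and its confidence for one image."""
    label, _, probs = learn.predict(PILImage.create(image_path))
    return str(label), float(probs.max())

print(classify('maybe_a_bird.jpg'))
```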

Then, I explored a new idea: a car brand classifier using a Stanford dataset. While I managed to begin data prep and explore the set, I ran out of time before completing model training.

Still, the seeds are planted.


Lessons Learned

Start with code, not videos. FastAI is best learned by doing.

APIs break. Be ready to adapt.

Folder structure matters more than you think.

ResNet-18 is fast, but has a ceiling. Go bigger if you can.

Don’t underestimate data quality. Labels can sabotage your results.

Colab’s GPU is your friend, but memory constraints will require workarounds (like batch size tuning).

Use vision_learner, DataBlock, and aug_transforms() wisely. They give you flexibility without requiring a PhD in ML.


Next Steps

Looking ahead, I’d like to:

Finish the car brand classifier.

Explore a movie recommendation system.

Deploy a working frontend.

Help at least one other learner build their own model.

If you’re just starting out: don’t wait. Open a notebook, ask questions, and start building. You’ll learn more from one failed run than ten perfect videos.


Got questions or stuck somewhere? I’m happy to help. Reach out or drop your notebook link.

The story so far

My name is Kovid, and this is my story.

A 19-year-old boy steps into a job interview for the first time. Nervous and worried, he can only wonder if he will face a disaster. The interviewer asks him, as all interviewers do, “Tell me about yourself!” He replies, “Hi, my name is Kovid, I’m 19 years old, and somebody recently named a global pandemic after me.”

Continue reading “The story so far”

Assignments for 2023

A new year is here and has brought new aspirations along with it. Last year was a period of much growth and change; I hope I can carry that momentum forward this year.

I view this year not as a linear set of objectives to be attained by the end of the year, but as a series of alternative avenues to be explored. As such, I might end up achieving all, some, or none of these goals by the end, but I might have found something else interesting along the way.

It will be a fun time, and I can’t wait to get started. Here are my broad goals for 2023 (in no particular order):

Continue reading “Assignments for 2023”

Your guide to finding your next Remote Job

The post-pandemic world is increasingly moving towards greater integration and globalization. As work becomes more and more collaborative and online, remote jobs are becoming the new standard.

Having observed this industry for the past year and worked in various remote environments, I have learned a little bit about how to navigate this new and highly rewarding job market. I’m about to start a second round of my own job hunt, and have decided to create a resource that will help job seekers find the right opportunities a little faster.

In this article, I have prepared a small curated list of online resources that you can use to find your next remote job. It includes job listings (where multiple companies post their openings simultaneously) as well as remote companies (which are frequently hiring). The resources I mention cover roles in Sales, Marketing, Operations, Customer Success, Design, and Development, among others. Finding the right job is half the journey to getting the right job, and knowing where to look is half the victory. Let’s get started:

Continue reading “Your guide to finding your next Remote Job”

Annual update: 2021

Hey there! It’s been a while (:

I made this website about a year ago as I was heading into my self-learning journey. I did not know what the future would hold, and looking back, I could’ve never guessed it would be something like this.

My journey has been challenging and exciting, and it has helped me grow in ways I didn’t think were possible.

Unfortunately, I haven’t been able to keep this website updated with all that has been happening in my life. So, I am writing this article.

Here’s what has happened since you last saw me here.

Continue reading “Annual update: 2021”

I Built an Operations Pipeline for The Athletic

The Athletic is a famous international sports journalism portal. It’s one of the few sports journalism portals that combines solid sports research with an entertaining writing style.

The Athletic features reputed writers and experts from all over the world: journalists and reporters of the highest quality, all working from different time zones, interacting with different people, but pursuing the same cause.

This week I’ve been wondering: how does The Athletic coordinate its highly heterogeneous workforce and all of their projects simultaneously? With strict deadlines on much of its content, what system does it use to ensure everything is always on time?

With this in mind, I came up with a mock collaboration system myself.


I created a content pipeline along with an integrated team database for The Athletic. A single integrated funnel for all its projects being made across the globe.

I think having a single, flexible database for the workflow of all its creators can really help solve logistical problems to a very large extent.

This database was made using Airtable, and here is a video explanation of the project.


In the database, I collect and organize all the (mock) articles currently being worked on in a single place. The example authors are actual authors who write for The Athletic; everything else (including all the blog posts) was my own invention. The primary table gives an easy way to review all the relevant information about the articles currently in the works at a single glance.

With an easy channel of messaging on each primary cell, the table also allows for seamless communication between staff members and authors of The Athletic, along with a working history of every edit made in the field for better context.

Apart from the primary database built in a spreadsheet, I used some of the data to create better, more intuitive views. I used the deadline dates to build a calendar with every article’s deadline pre-marked and color-coded by current status. Because the views are linked two ways, any change to a deadline can be made on either the calendar or the spreadsheet and shows up everywhere else automatically.

Similarly, I used the status field in the spreadsheet to create a Kanban board with all the tasks arranged in buckets according to their current status. The Kanban board provides an intuitive way to see how far along every project is, and allows articles to be moved between statuses with a simple drag and drop.

I also built a separate database for all the team members and linked it to the primary database through the projects they are currently working on. I could assign, edit, or delete projects from either page, and the team database also gave me extra context about each team member. This way, someone can keep track of the team in the same place they keep track of the projects, without unnecessary information crowding the view.
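
To make that structure a little more concrete, here’s a rough sketch of the two linked tables. The field names and status options are illustrative, not the exact Airtable schema I used:

```python
# Rough sketch of the base: a content pipeline table linked two ways
# to a team table. Field names and status options are illustrative.
content_pipeline = {
    "Article title": "text",
    "Author": "linked record -> Team",  # pulls the writer in from the team table
    "Status": ["Idea", "Drafting", "Editing", "Scheduled", "Published"],
    "Deadline": "date",                 # drives the calendar view
    "Notes": "long text",               # per-article messaging and edit history
}

team = {
    "Name": "text",
    "Time zone": "text",
    "Current projects": "linked record -> Content pipeline",  # mirror of the Author link
}
```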

To make the transition easier, I also wrote down an SOP (standard operating procedure) for staff members that can be referenced here.

Happy reading (:

Designing ads and building landing pages for Pesto

Pesto is revolutionizing the way Indians look at education and career development. 

It has uplifted dozens of IT workers from their mundane jobs and put them in the driver’s seat of careers that feel meaningful to them. Careers that let them hone their skills and grow alongside the company, as well as compensate them the way they deserve.

Through an intensive 12-week training program, participants of Pesto learn, unlearn, and relearn core business skills, soft skills, effective communication, and of course, advanced software engineering.

The Pesto programme is not for everyone. In fact, a typical Pesto participant usually has 2+ years of work experience prior to joining. But Pesto places no restrictions on who can apply. It does not care about your degree, your credentials, or your CV.

As long as you have a body of work to prove your worth, and an innate drive to learn, Pesto will welcome you with open arms.

Then launch you to the career of your wildest dreams.


Pesto’s vision and philosophies resonate with me deeply, and this week, I’ve been building projects the way I would have if I had been working with them.

Continue reading “Designing ads and building landing pages for Pesto”

My guide to getting work done

I am a somewhat lazy person.

This is incredibly helpful for me. It allows me to usually get my work done with time to spare.

To think of all those nights spent rote-learning and scrambling to hand in an assignment the next morning. It’s fair to say I’ve learned a few things about time management since then.

By implementing techniques and principles learned from some really awesome books, and by listening to advice from workhorses masquerading as people, I now manage to save a good number of hours each week after finishing my week’s work.

Continue reading “My guide to getting work done”