My Blogs

My Experiences, Lessons, and Technical Journeys

The Lazy Developer’s Guide to Automation: How I Made GitHub Work for Me
January 14, 2025


Picture this: It's another busy day at work, and I'm juggling multiple hotfixes at Eduport. Each fix requires creating a pull request, following the proper formatting, linking the correct ticket ID, and ensuring it goes to the right branch. It's not rocket science, but it's repetitive, time-consuming, and frankly, boring. As a developer who believes in the DRY (Don't Repeat Yourself) principle, this manual PR creation process felt like a personal affront to my lazy (I mean, efficient) nature.

The Breaking Point

After the fifth PR of the day, I had enough. My inner voice screamed, "There has to be a better way!" That's when it hit me: if I was going to be lazy, I needed to be smart about it. The best developers aren't the ones who enjoy repetitive tasks; they're the ones who automate them away.

The Solution: GitHub Actions to the Rescue

I decided to create a GitHub Action that would handle the entire PR creation process automatically. The concept was simple: embed all the necessary information in the commit message, and let the automation handle the rest. Want to create a PR? Just include "pr to" and a ticket ID in your commit message, and boom: the robot takes care of everything else.

Here's what my lazy (but brilliant) solution does:

- Creates PRs automatically based on commit messages
- Extracts ticket IDs and links them properly
- Handles branch targeting with a fallback mechanism
- Applies a standardized PR template
- Manages protected branch rules

The Magic Format

The beauty lies in its simplicity. Instead of navigating through GitHub's UI, all I need to do is:

    git commit -m "feat: add awesome feature pr to main with 12345"

That's it. No clicking through web interfaces, no copy-pasting ticket numbers, no filling out PR templates. The action takes care of everything, creating a perfectly formatted PR with all the necessary components.
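To make the convention concrete, here's a hypothetical sketch (plain Python, not the actual action code) of how a commit message in this format could be parsed. The regex, field names, and helper are my own illustration, not taken from the real workflow:

```python
import re

# Sketch of the "pr to <branch> with <ticket-id>" convention described above.
# The real GitHub Action's parsing may differ; this only illustrates the idea.
PR_PATTERN = re.compile(r"pr to (?P<branch>\S+)(?: with (?P<ticket>\d+))?", re.IGNORECASE)

def parse_commit_message(message):
    """Extract the target branch, ticket ID, and PR title from a commit message."""
    match = PR_PATTERN.search(message)
    if match is None:
        return None  # no "pr to" marker: no PR should be created
    return {
        "branch": match.group("branch"),           # target branch for the PR
        "ticket": match.group("ticket"),           # linked ticket ID, if present
        "title": message[:match.start()].strip(),  # PR title from the message body
    }

result = parse_commit_message("feat: add awesome feature pr to main with 12345")
```

A workflow built around this only has to run the parser on the head commit and, when it returns something, call the GitHub API to open the PR against the extracted branch.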
Why This Makes Me a Better Developer

Some might say this is just lazy. I say it's strategic laziness. By automating this process, I've:

- Eliminated human error in PR creation
- Standardized our team's PR format
- Saved countless hours of manual work
- Freed up mental space for actual problem-solving

And the generated PR can still be edited afterwards, so automation isn't a dead end.

The Ironic Truth

Here's the thing about lazy developers: we often work harder initially to work less later. The time I spent creating this GitHub Action was probably more than what I'd spend creating PRs manually for a month. But that's not the point. The point is that every automated task is a small victory against tedium, a step toward a more efficient workflow.

Conclusion

They say lazy people find the easiest way to do things. I prefer to think of it as finding the smartest way. In software development, automation isn't just about being lazy; it's about being efficient, consistent, and focusing on what truly matters: solving problems and creating value. So the next time someone calls you lazy for automating your workflow, remember: you're not lazy, you're just living in 2025 while they're stuck in the manual labor of 2020.

P.S. If you're interested in implementing this yourself, check out my GitHub Action configuration. Because sharing automation is caring... and also because I'm too lazy to keep explaining how it works to everyone who asks.

Originally published by a proudly lazy developer who now has more time to write blog posts about being lazy.

GitHub Action: Here

If you enjoyed this post, connect with me on LinkedIn and follow me on GitHub for more fun project stories, creative experiments, and ideas that go beyond typical development. Let's connect, share, and keep building cool things together!

LinkedIn: sunithvs
GitHub: sunithvs
Website: sunithvs

Follow along for more unique project stories and engineering adventures!

How I Built & Deployed a Real-time Polling App in 30 Minutes
January 8, 2025


So today during our sprint meeting, something pretty relatable happened. We needed to run some quick polls, but got annoyed with all the ads and complex UIs in the existing tools. You know how it goes: you just want to create a simple poll, but end up clicking through five screens and closing popup ads 🤦‍♂️

Live Link: polls.radr.in

The Lunch Break Challenge

I figured: hey, why not build something super simple? I had about an hour during lunch, so I decided to give it a shot. Having just learned about Supabase's real-time features, I thought this was the perfect opportunity to put that knowledge to use. The goal was straightforward:

- Create polls quickly
- See results in real-time
- No ads, no fluff

The Implementation Journey

Step 1: Planning with AI

First, I turned to Claude to help me structure the project. I asked for a detailed prompt to generate this application, and it provided an extensive specification covering everything from database schema to user flows. Here's what the conversation looked like:

Me: I need to create a poll app using Next.js and Supabase where a user can create polls and share a unique URL with other users. When another user opens the link, they will get the poll question; they can submit the poll with their name (name mandatory) and then go to a results page with live updates (use Supabase realtime). The user who created the poll can edit the poll and clear the results, and editing the poll also clears the results. Create a detailed prompt to generate this using Bolt or v0; don't write the code, just a detailed prompt.

Claude came back with a comprehensive prompt:

Create a real-time polling application using Next.js and Supabase with the following specifications:

Database Schema:

1. Polls table
   - id (uuid, primary key)
   - creator_id (reference to users)
   - question (text)
   - created_at (timestamp)
   - last_edited_at (timestamp)
   - is_active (boolean)
   - url_slug (unique string for sharing)
2. Options table
   - id (uuid, primary key)
   - poll_id (reference to polls)
   - option_text (text)
3. Responses table
   - id (uuid, primary key)
   - poll_id (reference to polls)
   - option_id (reference to options)
   - respondent_name (text, not null)
   - submitted_at (timestamp)

Features and Pages:

1. Poll Creation Page ('/create')
   - Form to input poll question
   - Dynamic addition of poll options (minimum 2)
   - Auto-generation of unique URL slug
   - Success message with shareable link
   - Error handling for form submission
2. Poll Voting Page ('/{poll-slug}')
   - Display poll question and options
   - Required name input field
   - Submit button for vote
   - Validation for duplicate votes
   - Redirect to results page after voting
   - Error handling for invalid poll URLs
3. Poll Results Page ('/{poll-slug}/results')
   - Real-time results using Supabase subscriptions
   - Bar chart or visual representation of votes
   - List of respondents with timestamps
   - Percentage breakdown of each option
   - Total vote count
4. Poll Management Page ('/manage/{poll-slug}')
   - Edit poll question and options
   - Clear all results functionality
   - Confirmation modals for destructive actions
   - Auto-update of last_edited_at
   - Clear results after editing

Technical Requirements:

1. Authentication
   - Simple authentication for poll creators
   - Public access for poll voters
2. Real-time Features
   - Supabase real-time subscriptions for live results
   - Optimistic updates for better UX
3. Data Validation
   - Server-side validation for all inputs
   - Prevention of duplicate votes
   - Required name field validation
4. URL Management
   - Unique, readable URL slugs
   - Proper error handling for invalid URLs
5. State Management
   - Client-side state for form handling
   - Server state for real-time updates
6. Error Handling
   - Graceful error handling for all API calls
   - User-friendly error messages
   - Loading states for all async operations
7. Responsive Design
   - Mobile-first approach
   - Accessible UI components
   - Clear visual hierarchy

Additional Features:

1. Copy to clipboard functionality for share URLs
2. Loading skeletons for better UX
3. Toast notifications for actions
4. Basic analytics (vote counts, submission times)
5. Rate limiting for submissions

The application should follow these flows:

1. Poll Creation Flow: Creator fills form → Validates input → Generates URL → Shows success with share link
2. Voting Flow: User opens shared link → Enters name → Submits vote → Redirects to results
3. Results View Flow: Real-time updates of votes → Visual representation → List of respondents
4. Edit Flow: Creator accesses management page → Makes changes → Confirms → Clears existing votes

Ensure all database operations are optimized and implement proper error boundaries for production use.

Step 2: Database Setup

I took this prompt and asked Claude to generate the actual database SQL. Within seconds, it provided the complete schema:

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Create polls table
CREATE TABLE polls (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    question TEXT NOT NULL CHECK (char_length(question) > 0),
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    last_edited_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    is_active BOOLEAN DEFAULT true,
    url_slug TEXT UNIQUE NOT NULL CHECK (char_length(url_slug) > 0),
    CONSTRAINT valid_question_length CHECK (char_length(question) <= 500)
);

-- Create poll options table
CREATE TABLE options (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    poll_id UUID NOT NULL REFERENCES polls(id) ON DELETE CASCADE,
    option_text TEXT NOT NULL CHECK (char_length(option_text) > 0),
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    CONSTRAINT valid_option_length CHECK (char_length(option_text) <= 200)
);

-- Create responses table
CREATE TABLE responses (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    poll_id UUID NOT NULL REFERENCES polls(id) ON DELETE CASCADE,
    option_id UUID NOT NULL REFERENCES options(id) ON DELETE CASCADE,
    respondent_name TEXT NOT NULL CHECK (char_length(respondent_name) > 0),
    submitted_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    CONSTRAINT valid_respondent_name_length CHECK (char_length(respondent_name) <= 100)
);

-- Create indexes for better query performance
CREATE INDEX idx_polls_url_slug ON polls(url_slug);
CREATE INDEX idx_options_poll_id ON options(poll_id);
CREATE INDEX idx_responses_poll_id ON responses(poll_id);
CREATE INDEX idx_responses_option_id ON responses(option_id);
CREATE INDEX idx_responses_submitted_at ON responses(submitted_at);

Step 3: Project Setup

With the database schema ready, it was time to create the project:

    npx create-next-app -e with-supabase poll

Step 4: The Magic Happens

The real fun began when I fired up WindSurf and started building. The AI prompts gave me a solid foundation, but now it was time to bring it to life. Here's how the conversation went:

First, I started with the prompt from Claude and asked for a basic polling app. It gave me a functional app right out of the gate! The best part? The real-time features were already included; I just needed to enable them in Supabase. The UI wasn't that great the first time, though. Just one more line of prompting, and boom! The create page went from plain to pretty, while keeping all the features intact. With this I got a great UI for the create, poll, and results pages.

I hosted the first version on Vercel and connected polls.radr.in. I needed a landing page for it, so I continued prompting, and I got a stunning animation which you can see on the website polls.radr.in.

The Fun Part

The best thing? I made it just in time before the meeting resumed, and we actually used it for the rest of our polls! Sometimes skipping a meal is worth it when you're in the flow 😄. Those hours spent learning Supabase really paid off: from experimentation to actual use in just a week. The code's on GitHub if anyone wants to check it out. Nothing fancy, just a simple solution to an annoying problem!
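One small piece of the spec worth sketching is the "auto-generation of unique URL slug". The app itself is Next.js/TypeScript, so this isn't the code the AI tools produced; it's just a minimal Python illustration of the idea:

```python
import secrets

def generate_slug(length=8):
    # token_urlsafe yields URL-safe characters (letters, digits, '-', '_');
    # trimming gives a short, shareable identifier, e.g. "Xq3_bZ9k".
    return secrets.token_urlsafe(length)[:length]

slug = generate_slug()
```

Uniqueness doesn't need to be handled in application code: the UNIQUE constraint on polls.url_slug in the schema above means a collision (already astronomically unlikely for random 8-character slugs) would simply fail the insert and can be retried.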
Github Repo: https://github.com/sunithvs/poll-flow

Git Worktree: Advanced Git Techniques for 10x Developer Productivity
November 23, 2024


Stop context switching between branches and boost your development workflow with Git's hidden powerhouse feature.

The Developer's Git Challenge

If you're working on multiple Git branches simultaneously, you've probably experienced this: feature development interrupted by urgent production bugs, constant branch switching, and the dreaded git stash dance. Sound familiar? There's a powerful Git feature that could transform your workflow.

At Eduport, I managed different tasks by making multiple copies of the code and switching between them. While this let me work on new features and quick fixes at the same time, it made it tricky to keep everything in sync, especially when dealing with database migrations and changes in environment variables. Then I found git worktree.

Advanced Git Techniques: Introducing Git Worktree

While most developers rely on basic Git commands, experienced users leverage Git Worktree to maintain multiple working directories connected to the same repository. This advanced technique eliminates the overhead of branch switching and context management.
Why You Should Use Git Worktree

- Handle multiple branches simultaneously
- Zero context switching overhead
- Maintain separate development environments
- No more git stash hassles
- Clean separation of concerns
- Improved code organization

Worktree Implementation

Here's the list of commands used to add worktrees:

    git clone <repository-url>
    cd <repository-name>

    # Create parallel working directories
    git worktree add ../path branch-name
    git worktree add ../hotfix urgent-fix
    git worktree add ../feature-1 new-development
    git worktree add ../debug debug/production-i

Handling Production Emergencies

Traditional Git workflow:

    git stash save "feature work"
    git checkout production
    git checkout -b hotfix/bug
    # Fix bug
    git checkout main
    git stash pop

Advanced worktree workflow:

    git worktree add ../hotfix hotfix/bug
    cd ../hotfix

It's super easy, right? 🤩

Advanced Git Techniques: Quick Reference

Essential worktree commands for improved productivity:

    # List all worktrees
    git worktree list

    # Add new worktree
    git worktree add ../path branch-name

    # Remove worktree
    git worktree remove ../path

    # Cleanup stale worktrees
    git worktree prune

Remember to keep it simple:

- Use clear directory names
- Group worktrees in one parent folder
- Clean up when done

Advanced Git workflows require practice. Start with simple scenarios and gradually incorporate more complex patterns as your team adapts. For more advanced usage and full documentation, refer here.

The transition from traditional branch switching to Git worktree management is an investment in your development workflow that pays dividends in productivity and organization.

Minglikko: A Valentine’s Day Project Born from Engineering Curiosity
November 19, 2024


Ever heard of a project that started with a simple quest to find a Valentine? Let me tell you about Minglikko, one of the best memories from CUSAT, a wild ride of creativity and engineering that began in February 2022!

The Origin Story

Picture this: Sahil, Rohit, Varsha, Nihal, and Sabeeh are brainstorming how to find a Valentine for Varsha. But being engineers, we couldn't just settle for a typical matchmaking approach. We thought, "What if we create something unique?" Our initial idea was simple: a Google Form filled in by users who want to find a Valentine, matching people based on interests. But we wanted more. That's when our design wizard Amrutha Chechi entered and transformed our basic concept into an amazing website design.

The Challenge

The Google Form was too limited, and while Airtable had the features, its free plan has some limitations, so we decided to build a full-fledged platform where users could:

- Create a login
- Answer interesting questions
- Remain completely anonymous
- Mark their priorities
- Be matched by a matchmaking algorithm
- Chat with their matched Valentine

Amrutha Dinesh's UI was so fantastic it put us under pressure to release quickly (and this was before ChatGPT existed; imagine that! 😂). Without that design, we wouldn't have envisioned such a comprehensive platform or achieved that level of outreach. Kudos to Amrutha Chechi! The questions and texts in the design were placeholders; the actual website had some changes.

Launch and Buzz

We dropped a "Coming Soon" poster with the name "Minglikko", and boom! Curiosity exploded in and around CUSAT. Random friends and strangers were sliding into our DMs, asking, "What is this?" Within just hours of launching, we saw incredible traction: from the initial 100 registrations in the first hour to over 500 registrations by midnight. Turns out, everyone was desperate to find a Valentine! 😂

On the night of February 13th, we faced a critical challenge: our matching algorithm wasn't ready.
Despite the website launch, we continued working intensively through the night to develop a robust matching system. The matching algorithm was a team effort, with Sahil Athrij playing a crucial role in developing the core logic. Together with Rohit, Varsha, Shaheen, Nihal Muhemmed, and Sabeeh, we crafted it. Our dedication paid off when we successfully presented a research paper about this algorithm at The Gandhigram Rural Institute, Dindigul District, with Sasi Gopalan Sir as our mentor.

Questions and the Gender "Feature"

When we released the website, some of my friends complained that there was no section for entering gender (we did this intentionally, not by accident!). As feature requests kept piling up, we (well, actually Rohit) decided to add a gender selection box on the homepage. He included an extensive list of 140 genders just for fun, but it was purely cosmetic. The matching algorithm didn't consider gender at all, and we didn't even store the selected data. 😄

The questions we asked were designed to be a little quirky and fun, bringing out each person's personality. Here's the list:

- Rate your Brains. 🧠 (0: Brain Potato, 5: Omniscient)
- Show me your biceps. 💪 (0: Pappadam, 5: Hercules)
- Beauty undo? (0: Mirrors scare me, 5: Cleopatra)
- How charismatic are you? (0: Bed is my valentine, 5: I sell sand in Sahara)
- How much money do you burn? 🤑 (0: Starving to death, 5: I pave golden roads)
- Generosity, yes rate it. 😇 (0: I burn orphanages, 5: Karl Marx)
- You die for God? (0: I am become Death. -J Robert Oppenheimer, 5: I am become Death. -Krishna)
- Your connection with Liberalism 🧐 (0: Girls? No school!!, 5: Martin Luther King)

Each question could receive up to 5 points, but users could only distribute a total of 20 points across all questions. This made them think carefully about which traits to prioritise, guessing what would matter most to their perfect match: a fun little game of planning to find their Valentine!
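The 20-point budget turns every profile into a small vector of trait priorities, which is exactly the shape a matching algorithm wants to work with. The real Minglikko algorithm isn't public (it became a research paper), so the following is purely a toy illustration of the general idea, with made-up scoring:

```python
def compatibility(a, b):
    """Toy compatibility score between two 8-trait profiles (0-5 per trait,
    at most 20 points total). Smaller trait-by-trait distance means a better
    match. Purely illustrative; not the actual Minglikko algorithm."""
    assert len(a) == len(b)
    distance = sum(abs(x - y) for x, y in zip(a, b))  # L1 distance
    max_distance = 5 * len(a)                         # worst possible case
    return 1 - distance / max_distance                # 1.0 = identical profiles

# Two users who allocated their 20 points identically:
score = compatibility([3, 2, 4, 1, 5, 2, 0, 3], [3, 2, 4, 1, 5, 2, 0, 3])
```

A real system would then pair users greedily or via an assignment algorithm over these pairwise scores; the sketch only shows the scoring step.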
The Fun Finale

We finally released the matches, and our Valentine's mission was a success. We created a platform that helped Varsha find her Valentine and provided opportunities for others. What started as a friend's matchmaking quest turned into a memorable campus memory.

Technical Journey

As a Django pro 😎, backend development was my playground. The server-side logic and database management flowed smoothly, with API integration happening at lightning speed. However, the frontend was a different story: a real challenge that had us scratching our heads. Stepping up to the plate, SANU MUHAMMED brought his UI expertise and completely transformed our basic interface.

- Chat System: Implemented end-to-end encryption using the Signal Protocol, ensuring user privacy and secure communication
- Backend Infrastructure: Used Django for robust and fast server-side development
- Matching Algorithm: Developed a custom algorithm to connect compatible users based on their interests and preferences
- Anonymous Identities: Created unique code names like "Shikkari Kuyil" to protect user anonymity
- Real-time Communication: Utilized Django Channels for seamless, instant messaging
- AWS: Used AWS EC2 for hosting the entire platform
- Collaborative Development: Team effort involving Sahil, Rohit, Varsha, Shaheen, Nihal, Sanu, and Sabeeh

Optimising Django Queries to Overcome the N+1 Problem!
November 11, 2024


As a Django developer, you may have encountered a common performance issue called the N+1 query problem. This can severely impact the speed and efficiency of your application, especially as your codebase and data grow. In this blog post, we'll dive into what the N+1 query problem is, why it's a problem, and how you can easily solve it using Django's powerful tools.

What is the N+1 Query Problem?

Imagine you have a Django application with three models: Company, Employee, and Project. You want to display a list of all companies, along with the names of their employees and the projects those employees are working on.

class Company(models.Model):
    name = models.CharField(max_length=100)

class Employee(models.Model):
    name = models.CharField(max_length=100)
    company = models.ForeignKey(Company, on_delete=models.CASCADE, related_name='employees')

class Project(models.Model):
    name = models.CharField(max_length=100)
    employees = models.ManyToManyField(Employee, related_name='projects')

Without any optimizations, your view might look something like this:

class CompanyListNoOptimisationView(View):
    def get(self, request):
        companies = Company.objects.all()  # 1 query for all companies
        data = []
        for company in companies:
            company_data = {
                'name': company.name,
                'employees': [
                    {
                        'name': employee.name,
                        'projects': [project.name for project in employee.projects.all()]  # 1 query per employee for projects
                    }
                    for employee in company.employees.all()  # 1 query per company for employees
                ]
            }
            data.append(company_data)
        return JsonResponse(data, safe=False)

In this scenario, the initial query fetches all the companies. But then, for each company, an additional query is made to fetch its employees, and for each employee, another query is made to fetch their projects. This is the classic N+1 pattern: one initial query plus N follow-up queries at every level of nesting.
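Counting the queries is straightforward arithmetic. As a quick sketch (pure Python, no Django required), with C companies and E employees per company the naive view issues:

```python
def naive_query_count(companies, employees_per_company):
    # 1 query for the company list,
    # + 1 query per company for its employees,
    # + 1 query per employee for that employee's projects.
    return 1 + companies + companies * employees_per_company

count = naive_query_count(2, 3)  # 1 + 2 + 6 = 9
```

The multiplicative term is what hurts: 100 companies with 10 employees each would already produce 1,101 queries for a single page load.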
Query Count Breakdown

Let's consider an example scenario with the following data:

- 2 companies
- 3 employees per company
- 2 projects per employee

In this case, the total number of queries generated would be:

- 1 query to fetch all companies.
- 1 query per company to fetch employees: 2 companies = 2 queries for employees.
- 1 query per employee to fetch projects: 2 companies × 3 employees = 6 queries for projects.

Total queries: 1 (companies) + 2 (employees) + 6 (projects) = 9 queries.

Total time: 0.03s
Number of queries: 9
Query 1: SELECT "base_company"."id", "base_company"."name" FROM "base_company"
Query 2: SELECT "base_employee"."id", "base_employee"."name", "base_employee"."company_id" FROM "base_employee" WHERE "base_employee"."company_id" = 1
Query 3: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 1
Query 4: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 2
Query 5: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 4
Query 6: SELECT "base_employee"."id", "base_employee"."name", "base_employee"."company_id" FROM "base_employee" WHERE "base_employee"."company_id" = 2
Query 7: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 3
Query 8: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 5
Query 9: SELECT "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" = 6
[11/Nov/2024 19:33:30] "GET /company-list/ HTTP/1.1" 200 379

These logs are generated by QueryCounterMiddleware; more about that at the end.

The problem with this approach is that as the number of companies, employees, and projects grows, the number of queries will increase dramatically, leading to slow response times and high database load.

Solving the N+1 Problem with select_related and prefetch_related

- select_related: Used for foreign key or one-to-one relationships. It performs a SQL join and retrieves the related object in a single query.
- prefetch_related: Used for many-to-many and reverse foreign key relationships. It performs a second query and maps the results in Python.

These tools allow us to reduce query counts by loading all related objects in bulk. Here's the same view, but now using select_related and prefetch_related:

class CompanyListOptimisedView(View):
    def get(self, request):
        companies = Company.objects.prefetch_related(
            Prefetch('employees', queryset=Employee.objects.select_related('company').prefetch_related('projects'))
        )
        data = []
        for company in companies:
            company_data = {
                'name': company.name,
                'employees': [
                    {
                        'name': employee.name,
                        'projects': [project.name for project in employee.projects.all()]
                    }
                    for employee in company.employees.all()
                ]
            }
            data.append(company_data)
        return JsonResponse(data, safe=False)

Query Count Breakdown (Optimised)

Now, with the optimised code:

- 1 query to fetch all companies.
- 1 query to fetch all employees with their related company data (using select_related).
- 1 query to fetch all projects for all employees (using prefetch_related).
Total queries: 3

Total time: 0.02s
Number of queries: 3
Query 1: SELECT "base_company"."id", "base_company"."name" FROM "base_company"
Query 2: SELECT "base_employee"."id", "base_employee"."name", "base_employee"."company_id", "base_company"."id", "base_company"."name" FROM "base_employee" INNER JOIN "base_company" ON ("base_employee"."company_id" = "base_company"."id") WHERE "base_employee"."company_id" IN (1, 2)
Query 3: SELECT ("base_project_employees"."employee_id") AS "_prefetch_related_val_employee_id", "base_project"."id", "base_project"."name" FROM "base_project" INNER JOIN "base_project_employees" ON ("base_project"."id" = "base_project_employees"."project_id") WHERE "base_project_employees"."employee_id" IN (1, 2, 4, 3, 5, 6)
[11/Nov/2024 19:40:05] "GET /company-list-optimised/ HTTP/1.1" 200 379

By applying select_related and prefetch_related, we reduced the query count from 9 to 3, achieving a significant performance improvement.

QueryCounterMiddleware

import time

from django.conf import settings
from django.db import connection
from django.utils.deprecation import MiddlewareMixin


class QueryCounterMiddleware(MiddlewareMixin):
    def process_request(self, request):
        # Only proceed if DEBUG is True and at least one flag is enabled.
        # Note the parentheses: without them, "and" binds tighter than "or",
        # so SHOW_QUERY_COUNT alone would trigger even when DEBUG is False.
        if settings.DEBUG and (settings.SHOW_RAW_QUERY or settings.SHOW_QUERY_COUNT):
            self.start_time = time.time()
            self.queries_before_request = len(connection.queries)

    def process_response(self, request, response):
        if settings.DEBUG and (settings.SHOW_RAW_QUERY or settings.SHOW_QUERY_COUNT):
            total_time = time.time() - self.start_time
            queries_after_request = len(connection.queries)
            if settings.SHOW_QUERY_COUNT:
                query_count = queries_after_request - self.queries_before_request
                # Print labels in yellow and values in green (ANSI escape codes)
                print(f"\033[93mTotal time:\033[0m \033[92m{total_time:.2f}s\033[0m")
                print(f"\033[93mNumber of queries:\033[0m \033[92m{query_count}\033[0m")
            if settings.SHOW_RAW_QUERY:
                # '\033[91m' switches to red; '\033[0m' resets the color
                for index, query in enumerate(connection.queries[self.queries_before_request:], start=1):
                    sql_query = query['sql']
                    print(f"\033[92mQuery {index}:\033[0m \033[91m{sql_query}\033[0m")
        return response

The QueryCounterMiddleware is a custom Django middleware that provides a simple way to monitor the performance of your application. It works by intercepting the request-response cycle and capturing information about the database queries executed during the process. Here's what the middleware does:

- Track the Number of Queries: When a request is made, the middleware stores the initial number of executed queries. After the request is processed, it calculates the difference to determine the total number of queries executed during the request.
- Log the Raw SQL Queries: In addition to the query count, the middleware can also print the raw SQL queries executed during the request. This can be extremely helpful for identifying the root cause of performance issues.
- Measure the Total Request Time: The middleware also tracks the total time taken for the request-response cycle, providing valuable insights into the overall performance of your application.

How to Use the QueryCounterMiddleware

To use the QueryCounterMiddleware in your Django project, follow these steps:

1. Add the Middleware to Your Project: Open your settings.py file and add the QueryCounterMiddleware to your MIDDLEWARE list:

MIDDLEWARE = [
    # Other middleware classes...
    'path.to.QueryCounterMiddleware',
]

2. Configure the Middleware Behavior: You can control the behavior of the QueryCounterMiddleware by setting the following variables in your settings.py file:

- SHOW_QUERY_COUNT: If True, the middleware will print the total number of queries executed during the request.
- SHOW_RAW_QUERY: If True, the middleware will print the raw SQL queries executed during the request.
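One gotcha worth noting: the middleware reads settings.SHOW_QUERY_COUNT and settings.SHOW_RAW_QUERY directly, so both names must actually be defined in settings.py, otherwise the attribute lookup raises an error on the first request. A minimal settings fragment (the values shown are just example defaults):

```python
# settings.py (fragment) - define both flags so the attribute lookups succeed
DEBUG = True
SHOW_QUERY_COUNT = True   # print the per-request query count
SHOW_RAW_QUERY = False    # set True to also print each raw SQL statement
```

If you'd rather not require the flags, the middleware could use getattr(settings, 'SHOW_QUERY_COUNT', False) instead of attribute access, which falls back to False when a flag is missing.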
Optimising your Django queries with select_related and prefetch_related can significantly improve application performance, especially when working with complex relationships. The N+1 query issue, though common, is avoidable with a few best practices, leading to faster, more efficient applications and a better user experience.

If you found this post helpful, connect with me on LinkedIn and follow me on GitHub for more insights, blogs, and stories on Django, backend development, and scalable application design. Let's connect and keep learning together!

LinkedIn: sunithvs
GitHub: sunithvs

Follow for more Django, backend tips, and development stories!

Optimising Django Queries to Overcome the N+1 Problem! was originally published in Python in Plain English on Medium.