Shantanu Bhusari
Backend engineer. I build distributed systems — microservices, event-driven pipelines, and real-time feeds that hold up in production.
Go · gRPC · AWS · Event-Driven Architecture
I'm a backend engineer who has designed and shipped production systems across e-commerce, fintech, and SaaS — 38+ microservices, 20+ Lambda functions, real users, real load. Most of the work is under NDA, but the architecture is mine to talk about.
I care about the decision behind the code. Why gRPC over REST for this service boundary. Why SQS is enough here and Kafka would be operational overhead. Why Fiber is the wrong framework inside a Lambda function — and how I learned that in production, diagnosed the cold start cost, and documented the fix. I try to know not just what to build but what not to reach for.
I'm open to backend engineering roles and contract work where architecture decisions actually matter — distributed systems, event-driven platforms, cloud-native infrastructure. If you're building at scale and want someone who has been there, let's talk.
Requirements are constraints, not features
What does the user experience? What breaks if this is slow or down? That framing drives every decision — service boundaries, data model, failure modes.
Stack should fit where the product is
A monolith ships faster at phase one. gRPC and event-driven pipelines pay off at real scale — not before. I make the call that fits now, not what might be needed later.
Ship with measured confidence
Cloud services are chosen before the first line of code is written. Production config comes from what testing actually showed — traffic patterns, latency, error rates — not guesses.
My Default Stack
Production topology — 15-service grocery platform. gRPC for internal latency-sensitive calls; SQS for async event distribution; Lambda for compute-heavy background work.
Lambda functions taking 6–9 seconds to cold start in production. Root cause: wrong framework for the runtime. Here's the diagnosis and the fix.
How do you keep product feeds consistent across 15 services without a distributed transaction? Event-driven with SQS, eventual consistency by design.
Four-service platform: Go API gateway, Python AI worker, Go notification worker, Next.js client — each runtime owns the work it does best.
Twitter-scale feed delivery at small scale: CDC pattern, Redis sorted sets for O(log N) retrieval, eventual consistency between writes and reads — by design.
Freelance Backend Engineer
- Designed service communication strategy across two e-commerce platforms (15 and 20 services): gRPC for internal latency-sensitive calls, AWS SQS for async event distribution — chosen over Kafka for operational simplicity at this scale.
- Diagnosed Lambda cold start bottleneck (6–9s) caused by Fiber framework overhead (5–10MB) and MongoDB init cost in FaaS context. Documented root cause analysis and migration path to minimal Lambda handlers.
- Built event-driven order state machine: payment_confirmed → delivery_shipped → invoice_generation_queued — explicit event transitions rather than mutable status fields, enabling reliable async fan-out to notifications, inventory, and invoicing services.
Freelance Backend Engineer
- Delivered 3 production SaaS backends (automotive, fitness, logistics) using Node.js, Express.js, MongoDB — monolithic architecture appropriate for project scale and timeline.
- Built serverless logistics platform on AWS Lambda: sub-10s execution for trip tracking and billing. Single-function deployment pattern (Express wrapped in serverless-http) — later recognised as an anti-pattern and documented in subsequent Lambda work.
B.Tech — Electronics & Telecommunication
- Focus on distributed systems and algorithms
- Built 10+ personal projects using JavaScript, TypeScript, Node.js, Express.js, and MongoDB
Coming soon
Experiments with new technology, problem statements I worked through, and insights from production systems.
Let's build something together.
I'm open to full-time roles, freelance projects, and interesting collaborations. Reach out and let's talk.