2026-05-08
10 min read

My First Month at Mynaksh: Two Things Nobody Asked Me to Fix

I joined as a senior engineer. In my first month I rewrote the data layer and built a versioned deploy system. Neither was on my JD. Here's why I did them anyway.

React Native · TanStack · DevOps

I joined Mynaksh as a Senior Software Engineer. I expected to ship features. I did. But I also rewrote the data layer of two React Native apps and built a versioned deploy system, neither of which was in my onboarding doc.

This is a post about doing platform work as an IC — when, why, and how to make sure you don't get crushed by it.

The Data Layer Was Bad

The first thing I noticed in the codebase was that every screen had the same shape:

useEffect(() => {
  setLoading(true);
  axios.get('/some/endpoint')
    .then(res => setData(res.data))
    .catch(err => setError(err))
    .finally(() => setLoading(false));
}, []);

useState for the data. useState for loading. useState for error. Manual cache management — which is to say, no cache management. Stale-while-revalidate by hoping for the best.

This is fine if you have three screens. We had a lot more than that. And it was showing up as:

  • Refetches on every navigation. The same endpoint hit five times in a session because each screen mounted and ran its own useEffect.
  • Inconsistent loading states. Each screen had its own loading shape, so the app felt jittery as users moved through it.
  • Stale data after mutations. A user updates their profile; the next screen still shows the old data because there's no shared cache to invalidate.
  • No retry logic. A failed network request was just a failed render.
  • No request deduplication. Two components on the same screen would each fire the same request independently.

I waited about a week before saying anything. I wanted to make sure I wasn't missing context — sometimes a codebase looks the way it looks for a real reason.

There wasn't a real reason. It was legacy. The team agreed it had to change.

The TanStack Migration

I picked TanStack Query (formerly React Query). The reasons were boring:

  • It solves cache coordination, request deduplication, retry, and stale-while-revalidate out of the box.
  • It has solid React Native support.
  • One engineer on the team had used it before, so we weren't starting from zero on familiarity.

But "it solves cache coordination" is doing a lot of work as a phrase. The actual case for the migration is the list of features the old useEffect + axios setup didn't have, and would have to reimplement at every call site to match. It's worth listing them, because cumulatively they're the difference:

  • Query keys. Every fetch is identified by a structured key (['user', userId], ['astrologer-list', filters]). The cache is keyed by that. Two components asking for the same data share one cache entry and one network request. Invalidation, refetching, and updates all work off the same key.
  • Request deduplication. If five components mount at the same time and all want ['astrologer', astrologerId], TanStack fires one request, not five. The other four subscribe to the same in-flight promise. This alone removed most of our redundant traffic.
  • Stale-while-revalidate. Each query has a staleTime and a gcTime. Within staleTime, the cache returns instantly. After it, the cache returns the stale value immediately and fires a background refetch. The user sees a fast UI and gets fresh data as soon as the network catches up — no spinners on data they've already seen.
  • Background refetching. Auto-refetch on reconnect and on mount when stale. Refetch-on-focus is on by default and makes sense on web; in React Native, where "focus" is fuzzier, you wire it up to AppState yourself via focusManager. Easy to tune per query.
  • Retry with exponential backoff. Failed queries retry automatically, with a tunable attempt count and delay function. Compare to the old code, where a failed axios.get was just a failed render.
  • Mutations with cache invalidation. useMutation fires writes and gives a clean way to invalidate or update affected query keys on success. Update a profile; the next read of ['user', userId] triggers a refetch automatically. The "stale data after mutation" class of bug is gone by construction.
  • Optimistic updates. Mutations support an onMutate hook for optimistic UI: update the cache before the server responds, roll back if the request fails. Right tool for like-button-style interactions where users expect instant feedback.
  • Parallel and dependent queries. A screen that needs three pieces of data fires three useQuery calls; they run in parallel. A query that depends on another (user → user's astrologer history) uses the enabled flag to wait for the upstream value. No hand-rolled coordination.
  • Pagination and infinite scroll. useInfiniteQuery handles cursor-based pagination, page caching, and the "load more" UX without extra plumbing.
  • Query cancellation. TanStack hands every queryFn an AbortSignal; pass it through to your fetcher, and a component unmounting mid-fetch cancels the request. With our old setup, in-flight requests would resolve into the void and occasionally log warnings about state updates on unmounted components.
  • Structural sharing. When a refetch returns "the same" data with one nested field changed, TanStack preserves object identity for unchanged sub-trees. React's referential-equality checks short-circuit, so unrelated components don't re-render. The kind of perf win that's invisible until you measure it and obvious afterward.
  • DevTools. A floating panel showing every query in the cache — current state (fresh / stale / inactive), data, last-fetch timestamp. Single most useful thing for debugging data flow. We use it every day.
  • Garbage collection. Inactive queries are evicted after gcTime. The cache doesn't grow forever. With our old setup, the "cache" was whatever components happened to still be mounted — eviction was incidental.

The headline win isn't any one of these. It's that the whole thing is one mental model: every fetch is a query keyed by an array, every write is a mutation, the cache is the single source of truth, refetching is automatic. The old code had every screen reinventing this in its own way, badly.

The migration approach was incremental:

  1. Set up a QueryClient and wrap both apps' navigation roots in QueryClientProvider.
  2. Build a thin layer of typed query hooks — useUserProfile, useAstrologerList, useWalletBalance, etc. — each one mirroring an existing axios call site.
  3. Migrate screen by screen. The "old way" hooks and the "new way" hooks coexisted during the transition. Screens that hadn't been touched kept working.
  4. Delete the old useEffect + axios machinery as each screen flipped over.

We got about 60% of the API surface migrated in three weeks. The rest is in flight as features get touched — every PR that modifies a screen also flips its API calls to TanStack.

The metrics we tracked, before and after, on the migrated screens:

  • API requests per session: dropped roughly 40%, mostly from deduplication.
  • Cold-start to interactive time: improved noticeably on screens that had previously fired four sequential requests; TanStack's parallelization plus better cache hits shaved seconds.
  • Lines of code per screen: down 15-25 lines on average. Across the migrated 60%, that's several thousand lines of boilerplate gone.
  • Cache-related bug reports: dropped to roughly zero post-migration. Stale-data-after-mutation bugs were a recurring class before; they basically stopped.

The Deployment Story

The second thing I noticed: there was no real release process.

When the team wanted to ship a new build, someone bumped the version and ran the build. If something broke in production, the rollback procedure was "build the previous commit again" — which takes 30+ minutes for a React Native app and is full of footguns. You can't reliably reproduce yesterday's build if any dependency has shifted underneath.

So I built a versioned deploy system on top of stage and release branches.

The shape of it:

Stage branch. Continuously built from main. The internal QA build always points at stage. Bugs caught here never reach users. Anyone on the team can install the stage build at any time.

Release branches. Cut from stage when we're ready to ship. Each release branch is tagged with a semantic version — release/1.42.0. Builds from release branches go to the App Store and Play Store via staged rollout. Once cut, a release branch is frozen except for hotfixes.

Version-tagged monitoring. Every analytics event, every error, every key metric is tagged with the build version it came from. Crash rates and error rates are graphed per version. A new version that spikes errors gets caught at 5% rollout instead of 100%.
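The tagging itself is a one-time wrapper around the analytics client, so no call site can forget it. A sketch — trackRaw and the hardcoded version are stand-ins for whatever SDK and build-info source you use (in React Native, typically react-native-device-info's getVersion()):

```typescript
// Stand-in for the real analytics SDK's send call.
type AnalyticsEvent = { name: string; props: Record<string, string | number> };
const sent: AnalyticsEvent[] = [];
function trackRaw(event: AnalyticsEvent) {
  sent.push(event); // in production: the actual network call
}

// In the app this comes from the build config, not a literal.
const APP_VERSION = '1.42.0';

// Every event goes through this wrapper, so app_version is always attached.
export function track(name: string, props: Record<string, string | number> = {}) {
  trackRaw({ name, props: { ...props, app_version: APP_VERSION } });
}

track('wallet_recharge_started', { amount: 199 });
```

Dashboards then group by the app_version property, which is what makes per-version crash and error graphs possible.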

Versioned rollback. Because builds are versioned and deployed via the stores' staged-rollout features, "rolling back" is "halt rollout of v1.42.0 in the store and re-promote v1.41.0 to 100%." No rebuild required. Minutes to revert, not hours.

The CI pipeline knows about three branch types:

  • main — runs lints, tests, type checks. No build artifacts.
  • stage — auto-builds the QA app and pushes to internal distribution on every merge.
  • release/* — builds production artifacts, signs, uploads to App Store Connect / Play Console as a staged rollout.
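As a config sketch of the branch routing — assuming GitHub Actions, with the real job bodies elided:

```yaml
on:
  push:
    branches: [main, stage, 'release/**']

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - run: echo "lint, tests, type checks"  # runs for every branch

  qa-build:
    if: github.ref == 'refs/heads/stage'
    needs: checks
    runs-on: macos-latest
    steps:
      - run: echo "build QA app, push to internal distribution"

  release-build:
    if: startsWith(github.ref, 'refs/heads/release/')
    needs: checks
    runs-on: macos-latest
    steps:
      - run: echo "build, sign, upload as staged rollout"
```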

Branch protection on release/* is strict: only the release engineer (a rotating role) can push, and pushes require green CI plus an approved PR. The rollout halt is a one-line script that toggles the staged-rollout percentage to zero on each store. Monitoring lives on a Grafana board with one row per recent version showing crash-free sessions, error rate, and key business metrics, all filtered by the app_version tag.
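On the Play side, halting boils down to an edits.tracks.update call on the Play Developer Publishing API with the release's status set to "halted" (iOS phased release has an equivalent pause in App Store Connect). A sketch of the request body the script builds — the track name and version codes are illustrative; sending it via an authenticated client (e.g. googleapis' androidpublisher v3) is elided:

```typescript
// Shape of the Play Developer API v3 track resource used by edits.tracks.update.
// Setting a release's status to "halted" stops the staged rollout from
// reaching any more users.
interface TrackRelease {
  versionCodes: string[];
  status: 'completed' | 'draft' | 'halted' | 'inProgress';
}

export function buildHaltBody(versionCodes: string[]): { track: string; releases: TrackRelease[] } {
  return {
    track: 'production',
    releases: [{ versionCodes, status: 'halted' }],
  };
}
```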

The biggest unlock was the version-tagged monitoring. Before, when something broke, the question was "is it broken for everyone or just some users?" After, the question was "which version is broken?" — and the answer was on a dashboard.

Doing Platform Work as an IC

A note on the meta-question: should a senior IC be doing this kind of work?

Sometimes yes, sometimes no. If you're at a company with a TL or platform engineer actively owning the data layer and the deploy process, get out of their way. If you're at a startup where nobody owns those layers and you have the experience to fix them, doing the work is fine — useful, even.

A few things that helped me not get crushed by it:

  • Both projects started as conversations, not as faits accomplis. "Hey, I noticed we don't have a real release process — would it be useful if I prototyped one?" gets a yes much more often than a surprise PR titled "implement versioned deploys."
  • Neither blocked feature work. I did them in parallel with my actual sprint commitments, not instead of them. If a cleanup project starts eating feature delivery, kill it or scope it down.
  • Both got a real owner once they were running. I built the initial versioned deploy system, then handed it to the engineer who ended up most adjacent to it. Building something and owning it forever are different commitments.

The advice condenses to: do the work, don't claim the title. The title gets earned by consistently doing the work over time, not by one cleanup sprint. Doing platform work to prove you can is fine. Doing it to be promoted is a different thing and reads differently.

What I'd Do Differently

Two things, in retrospect:

  • Smaller proof of concept first. I built the TanStack scaffolding all at once and migrated 10 screens before showing the team the pattern. A smaller PoC on three screens, with a clearer handoff to the team for the rest, would have spread ownership earlier.
  • Pair the deploys work. The versioned deploy system was a solo build for too long. I should have brought another engineer in by week two; we ended up with a system only I deeply understood, which is a bus-factor risk for a release-critical piece of infrastructure. We've since been pairing on every change to it; should have started that way.