In contrast to Google, Microsoft, my former employer, probably has (had?) the best policy amongst big tech: moonlighting wasn't just tolerated, but actively encouraged! (...provided it runs on Windows, of course) ...because it's basically free training/experience if it means exposure to new APIs/platforms/libs/concepts - and it definitely helps the morale of folks who love to build things but who ended up with an extremely narrow-scoped job at the company (e.g. PMs who don't get to write code, or SDETs and SREs who only get noticed by management when they don't do their jobs).
During the launch of Windows 8, MSFT's moonlighting policy was also part of their Windows App Store strategy: we were all heavily encouraged to make a "Windows Store app" so that SteveB could claim MS had N-many apps in its app store, because that's how Leadership thought they could build credibility vs. Apple's established App Store (of course, what actually ended up happening was hundreds of cr-apps that were just WebView wrappers over live websites).
In contrast, I understand Apple might have the worst moonlighting policy: I'm told that unless you work directly on WebKit or Darwin, you have to deactivate your GitHub account or else find yourself swiftly dragged to the proverbial Trash.
> I wonder how long it'll be before all AI costs are flat unlimited monthly fees or even free across the board, without compromise.
That's already the case if you can self-host an LLM; you don't even need a mythical H200: gamer-grade GeForce cards can get you a long way there (if this page is to be believed: https://www.runpod.io/gpu-compare/rtx-5090-vs-h200 )
...after RAM prices return to normalcy, of course - and then wait another 2 or 3 generations of GPU development for a 96GB HBM card to hit the streets - and also assuming SotA or cloud-only LLMs don't experience lifestyle-inflation, though I assume they must: OpenAI/Anthropic/etc.'s business model depends on people paying for access, so it's in their interest to make running them locally as difficult as possible.
That page compares models that easily fit in the VRAM of either GPU. The biggest difference appears when one card can fit a model and the other cannot.
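To make that concrete, here's a rough back-of-envelope sketch of when a model's weights fit on a given card (the VRAM figures, quantization sizes, and overhead factor are my own assumptions for illustration, not numbers from that page):

```python
# Back-of-envelope VRAM check: will a quantized model's weights fit on a card?
# All figures here are rough assumptions for illustration, not benchmarks.

def model_vram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Weights only, with ~20% headroom for KV cache, activations, and runtime."""
    weight_bytes = params_b * 1e9 * (bits_per_weight / 8)
    return weight_bytes * overhead / 1e9

cards = {"RTX 5090": 32, "H200": 141}  # advertised VRAM in GB (assumed)
models = {"8B @ Q4": (8, 4.5), "70B @ Q4": (70, 4.5), "70B @ FP16": (70, 16)}

for name, (params_b, bits) in models.items():
    need = model_vram_gb(params_b, bits)
    verdict = ", ".join(
        f"{card}: {'fits' if need <= vram else 'too big'}" for card, vram in cards.items()
    )
    print(f"{name}: ~{need:.0f} GB needed -> {verdict}")
```

Under those assumptions, an 8B model at 4-bit needs ~5 GB and fits anywhere, while a 70B at 4-bit needs ~47 GB - exactly the regime where a 32GB gamer card loses to the datacenter part.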
> Claude is pretty good at designing data models in my experience
Yesterday, Claude decided to go with nvarchar(100) for an IP address column instead of varbinary(16), and thinks RBAR triggers are just as good as temporal tables.
So, no. Claude is not good at designing data models in my experience.
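For context on why that's a blunder: the binary form of any IP address is at most 16 bytes, which is trivial to confirm with Python's stdlib (a quick illustration of the point, not anything Claude produced):

```python
import ipaddress

# The packed (binary) form of any IP address is 4 bytes (IPv4) or 16 bytes
# (IPv6), so varbinary(16) covers both. nvarchar(100) burns up to 200 bytes
# per row (2 bytes/char on SQL Server) and sorts lexically, so range queries
# over string-typed IPs give wrong answers.
for addr in ("192.168.0.1", "2001:db8::1"):
    packed = ipaddress.ip_address(addr).packed
    print(f"{addr} -> {len(packed)} bytes: {packed.hex()}")
```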
Yes; more depression and anxiety about an uncertain future.
The SWE people I know at software companies now heavily using these agents complain to me that their workday is nothing but code reviews of the agents’ output and tedious prompting to prod them back into line; they say they don’t get to actually write code until they get home to work on their personal projects.
3 years ago I never would have believed this capability was possible; I’ve since adjusted my expectations and now assume that in another 3 years the models/agents will have improved enough to reduce the amount of code review required, leaving us with precious little else to do for our shareholders; or the opposite: they don’t improve, and we’re stuck doing thankless PR reviews until the end.
Please tell me where and how, in this future, I’m supposed to find satisfaction and pride in my work when what gets produced isn’t my own work anymore?
Not OP. Sounds like he was considered to be a manager and wasn't allowed to get into the weeds. So instead of just managing the offshore team, he wrote some of the code for them and then let them take credit for it.
Which also means that he wasn't doing his job (management) and was instead micromanaging his staff by doing their job.
This is such a common problem with highly technical managers: they can't seem to understand how to change focus or scope and do their new jobs better. Instead they fall back on trying to ship features, thinking that this is productive, and pat themselves on the back for staying technical.
Yes and no. My job title was "Software Engineer," though my management chain told me my role was "Product Owner." Agile was fine at the beginning, when it was a few people who knew what they were doing, but it's become a load of horse-shit.
The issue was that my management chain was concerned that my time was too valuable to be spent writing code. There's a yes-and-no in this one, too. I was a reasonably well-paid US-based software engineer, so yes, my time was valuable. And yes, some of the non-coding tasks I performed were probably more impactful than writing code. But... code + machine-parsable specifications + docs + tests are very good ways of communicating exactly what you want.
I'm just sort of laughing thinking about what my old management chain would think if they knew our India-based devs and I were using TLA+ as the core of our specification/documentation. Actually, I doubt they would understand it.
> though I left MSFT in 2017 so some things might have changed since.
Honestly, I struggle to think of what has actually changed between Office 2013 and Office 2024 (and their Office 365 equivalents). I know the LAMBDA function was a big deal, but they made the UI objectively worse by wasting screen space with ever-phatter non-touch UI elements; and the Python announcement was huge... before deflating like a popped party balloon when we learned how horribly compromised it was.
...but other than that, Excel remains exactly as frustrating to use for even simple tasks - like parsing a date string - as it was 15 years ago[1].