


You’re describing task reallocation, but the bigger second-order effect is where the firm can now source the remaining human judgment.

AI reduces the penalty for weak domain context. Once the work is packaged like that, the “thinking part” becomes far easier to offshore because:

- Training time drops because you're not teaching the whole craft, just exception-handling around an AI-driven pipeline.

- Quality becomes more auditable because outputs can be checked with automated review layers.

- Communication overhead shrinks with fewer back-and-forth cycles when AI pre-fills and structures the work.

- Labor arbitrage expands: the limiting factor stops being "can we find someone locally who knows our messy process" and becomes "who is the cheapest person who can supervise and resolve exceptions."

So yeah, the jobs mostly remain and some people become more valuable. But the clearing price for that labor moves toward the global minimum faster than it used to.

The impact won’t show up as “no jobs”; it is already showing up as stagnant or declining Western salaries, thinner career ladders, and more of the value captured by the firms that own the workflows rather than the people doing the work.


Isn't that what a well run company does when creating a process? Bureaucracy and process reduce the penalty of weak domain context and are in fact designed to obviate that need. They "diffuse" the domain knowledge into a set of specifications, documents, and processes. AI may be able to accelerate that bureaucracy, or subsume it. But since when has the limiting factor been "finding someone locally who knows the process"? Once you document a process, the power of computing means you can outsource any of it you want, no? Again, AI may subsume all the back-office or bureaucratic work. Perhaps it will totally restructure the way humans organize labor, run companies, and coordinate. But that system will have to select for a different set of skills than "filling out n forms quickly and accurately." The wage stagnation, etc., predates AI and might be due to other structural factors.


> Isn't that what a well run company does

How many of those do you see around?


I bet we're about to see a lot of 10-person $100M+ ARR companies emerge. That's a scale where teams can be tight and excel.


If you can build that with AI, then 9 people with AI can probably wipe out that company, only to be wiped out by 8 people with AI…and so on.


Not necessarily. That's the old "I made Twitter in a weekend" joke.

It's not because you can technically replicate a product that your company will be successful. What makes a company successful is its sales force, internal processes, and luck. All three are extremely difficult to replicate: a sales force is based on a human network you have to build, internal processes are either organic or kept secret, and luck can only be provoked by staying alive long enough, which means you need money.


massively underrated comment detected.


when.

people have been saying that since 2022.

when and how. hmm??

show your work.

or is this just more hype being spewed...


I think something around that scale (say maybe 20 employees, but definitely not hundreds) was possible even before LLMs got popular, but the people involved needed to be talented and focused. I'm not sure if AI will really change that though.


In 2014, Facebook acquired WhatsApp for $19B and they had 55 employees


Correction: 55 grossly underpaid employees!


"it is already showing up as stagnant or declining Western salaries"

Real median salary, and real median wages are both rising for the last couple years. Maybe they would have risen faster if there was no AI, but I don't think you can say there has been a discernible impact yet.


I’d like a source for that. College graduates are no longer at an employment advantage compared to their uneducated peers. The average age of a new hire increased by 2 years over the past 4 years.

Young people in the west have definitely seen declining salaries, if only by virtue of the fact that they’re not being offered at all.

https://www.clevelandfed.org/publications/economic-commentar...

https://www.reveliolabs.com/news/social/65-and-still-clockin...


Real wage growth has been positive for the last 3 years:

https://data.bls.gov/timeseries/CES0500000013?output_view=pc...


I don't think that's true, if you trust gemini at least.. "In 2025, U.S. software engineer pay is barely keeping pace with inflation, with median compensation growing 2.67% year-over-year compared to 2.7% inflation. While salaries held steady or increased during the 2021-2023 inflationary period, many professionals reported that real purchasing power remained stagnant or dipped, making it difficult to get ahead. "


> AI reduces the penalty for weak domain context

This is why (personal experience) I am seeing a lot of full-stack jobs compared to specialized backend, FE, and ops roles. AI does 90% of the job of a senior engineer (or so the CEOs believe), and companies now want someone who can do the full "100," not just supply the missing "10." So that remaining 90 now comes from an amalgamation of other responsibilities.


In my mind we will have a bimodal set of skills in software development: something like a product engineer (an engineer who is also a product manager; this person conceptualizes features and systemically considers the software as a whole in terms of ergonomics, business sense, and the delight in building something used by others) and something like a deep-in-the-weeds engineer (an engineer who innovates on the margins of high performance, tuning, and deep improvements to libraries and other things of that nature). The former will need to skill up in rapid context switching, keeping the full model of the customer journey in their mind while also executing with enough technical rigor to prevent inefficiencies. The latter will need to be able to dive extremely deeply into nuanced subjects like fine-tuning the garbage collector, the compiler, network performance, or internals of the DOM or OS or similar.

I would expect a lot of product engineering to specialize further into domains like healthtech, fintech, adtech, etc. While the in-the-weeds engineering will be platform, infra, and embedded systems type folks.


Can I take a guess that you believe you will speciate into the former?


Actually, ideally I'd love to dig deep into and specialize in database management system internals. I think data engineering in general is the under-discussed but fundamental necessity for any sort of application, AI or otherwise, and especially for any concept of a data warehouse.


Funny you ignored the third-order effect, where the efficiency really does enable lower costs.


Which is never realized. Price points don't decrease. Profit taking increases.




What do you mean when you say “AI is reducing training overhead”?


Basically, "You don't have to understand how this works, just push this button when x, or flip this switch when y."

I don't think the impact will be quite as large as some are saying here, but it won't be minimal either.


> automation tools ... eliminates the boring part of the job, and then the job description shifts.

But the job had better take fewer people, or the automation is not justified.

There's also a tradeoff between automation flexibility and cost. If you need an LLM for each transaction, your costs will be much higher than if some simple CRUD server does it.
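The LLM-vs-CRUD cost tradeoff can be made concrete with a back-of-envelope sketch; every number below (token counts, API pricing, server cost, volumes) is a hypothetical assumption for illustration, not a real quote.

```python
# Rough per-transaction cost comparison. All figures are made-up
# illustrations: real token counts, API prices, and server costs vary widely.

def llm_cost_per_txn(tokens_per_txn: int = 2_000,
                     usd_per_million_tokens: float = 5.0) -> float:
    """Cost if each transaction requires one LLM call."""
    return tokens_per_txn * usd_per_million_tokens / 1_000_000

def crud_cost_per_txn(server_usd_per_month: float = 50.0,
                      txns_per_month: int = 1_000_000) -> float:
    """Amortized cost if a plain CRUD server handles the same volume."""
    return server_usd_per_month / txns_per_month

llm, crud = llm_cost_per_txn(), crud_cost_per_txn()
print(f"LLM ~${llm:.5f}/txn vs CRUD ~${crud:.5f}/txn ({llm / crud:.0f}x)")
```

Even with generous assumptions for the LLM, the per-transaction gap runs to orders of magnitude, which is the point about flexibility versus cost.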

Here's a nice example from a more physical business - sandwich making.

Start with the Nala Sandwich Bot.[1] This is a single robot arm emulating a human making sandwiches. Humans have to do all the prep, and all the cleaning. It's slow, maybe one sandwich per minute. If they have any commercial installations, they're not showing them. This is cool, but ineffective.

Next is a Raptor/JLS robotic sandwich assembly line.[2] This is a dozen robots and many conveyors assembling sandwiches. It's reasonably fast, at 100 sandwiches per minute. This system could be reconfigured to make a variety of sandwich-format food products, but it would take a fair amount of downtime and adjustment. Not new robots, just different tooling. Everything is stainless steel or food grade plastic, so it can be routinely hosed down with hot soapy water. This is modern automation. Quite practical and in wide use.

Finally, there's the Weber automated sandwich line.[3] Now this is classic single-purpose automation, like 1950s Detroit engine lines. There are barely any robots at all; it's all special purpose hardware. You get 600 or more sandwiches per minute. Not only is everything stainless or food-grade plastic, it has a built-in self cleaning system so it can clean itself. Staff is minimal. But changing to a product with a slightly different form factor requires major modifications and skills not normally present in the plant. Only useful if you have a market for several hundred identical sandwiches per minute.

These three examples show why automation hasn't taken over. To get the most economical production, you need extreme product standardization. Sometimes you can get this. There are food plants which turn out Oreos or Twinkies in vast quantities at low cost with consistent quality. But if you want product variations, productivity goes way, way down.

[1] https://nalarobotics.com/sandwich.html

[2] https://www.youtube.com/watch?v=_YdWBEJMFyE

[3] https://www.youtube.com/watch?v=tRUfdBEpFJg


> But the job had better take fewer people, or the automation is not justified.

In many cases, this is a fallacy.

Much like programming, there is often essentially an infinite amount of (in this case) bookkeeping tasks that need to be done. The folks employed to do them work on the top X number of them. By removing a lot of the scut work, second order tasks can be done (like verification, clarification, etc.) or can be done more thoroughly.

Source: Me. I have worked waaaay too much on cleaning up the innards of less-than-perfect accounting processes.


Well said. It’s like they think that the only thing automation is good for is cutting costs. You can keep the same staff size but increase output instead, creating more value.


"They" don't think the only thing automation is good for is cutting costs. Management thinks the only thing worth doing, at all, using any means, is cutting costs.


Well that’s clearly false, and obviously “they” refers to people that include management lol


The firm simply assumes that if the top X was sufficient in the past, it is still sufficient now.

From the perspective of modern management, there's really no reason to keep people if you can automate them away.


> The firm simply assumes that if the top X was sufficient in the past, it is still sufficient now.

> From the perspective of modern management, there's really no reason to keep people if you can automate them away.

These are examples of how bad management thinks, or at best, how management at dying companies think.

Frankly, this take on “modern management” is absurd reductionist thinking.

Just a few points about how managers in successful companies think:

- Good employees are hard to find. You don’t let good people go just because you can. Retraining a good employee from a redundant role into a needed role is often cheaper than trying to hire a new person.

- That said, in any sufficiently large organization, there is usually dead weight that can be cut. AI will be a bright light that exposes the least valuable employees, imho.

- There is a difference between threshold levels of compliance (e.g., docs that have to be filed for legal reasons) and optimal functioning. In accounting, a good team will pay for themselves many times if they have the time to work on the right things (e.g., identifying fraud and waste, streamlining purchasing processes, negotiating payment terms, etc.). Businesses that optimize for making money rather than getting a random VP their next promotion via cost-cutting will embrace the enhanced capability.

Yes, AI will bring about significant changes to how we work.

Yes, there will be some turmoil as the labor market adjusts (which it will).

No, AI will not lead to a labor doomsday scenario.


> - Good employees are hard to find. You don’t let good people go just because you can. Retraining a good employee from a redundant role into a needed role is often cheaper than trying to hire a new person.

Your best employees at a given price though.

Part of firm behavior is to let go of their most expensive workers when they decide to tighten belts.

Unless your employees are unable to negotiate (lacking the information and leverage to be paid the market rate for their ability), your best employees will be your more expensive, senior employees.

Everything is at a certain price. Firing your best employee when you can get the job done with cheaper, or you can make do with cheaper, is also a common and rational move.

While I agree a labour doomsday scenario is unlikely, I think an under-employment scenario is highly likely. Offshoring ended up decimating many cities and local economies, as factory foremen found new roles as burger flippers.

Nor do people retrain into new domains and roles easily. The more senior you are, the harder it is to recover into a commensurately well paying role.

AI promises to reduce the demand for the people in the prime age to earn money, in the few high paying roles that remain.

Not the apocalypse as people fear, but not that great either.


Is Microsoft a "dying company"? The stock market certainly thinks otherwise.


> Is Microsoft a "dying company"? The stock market certainly thinks otherwise.

This is the entire sentence that I wrote that you seem to be referring to:

“These are examples of how bad management thinks, or at best, how management at dying companies think.”

MS falls under the first part — bad management. Let literacy be your friend.

To elaborate, yes, I think that MS is managed incredibly poorly, and they succeed despite their management norms and culture, not because of it. They should be embarrassed by their management culture, but their success in other areas of the company allows the bad management culture to persist.


What successful tech companies don't have "bad management", then?


See self checkouts at supermarkets, with teams reduced to when checkouts go bad, or filling the shelves.

Not only do the prices increase, now we get pushed to do their jobs for free, while the chains lay off their employees.

Hence I usually refuse to use them, even if I have to spend some extra time queuing.


I have mixed feelings on these.

For a full cart, I expect a cashier or two to be available.

If I have 3-5 items, I’d rather do it myself than wait.

That said, even 20-30 years ago, long before self checkout, at places like WalMart, one could wait 15-20 minutes in line. They had employees but were too cheap to have enough. They really didn’t care.

I don’t even understand how that math works. I might have kept going there if they had a few extra lowly paid cashiers around.


> But the job had better take fewer people, or the automation is not justified.

Not necessarily. Automation may also just result in higher quality output because it eliminates mistakes (less the case with "AI" automation though) and frees up time for the humans to actually quality control. This might require the people on average to be more skilled though.

Even if it only results in higher output volume you often have the effect that demand grows also because the price goes down.


There's a classic book on this, "Chapters on Machinery and Labor" (1926). [1]

They show three cases of what happened when a process was mechanized.

The "good case" was the Linotype. Typesetting became cheaper and the number of works printed went up, so printers did better.

The "medium case" was glassblowing of bottles. Bottle making was a skilled trade, with about five people working as a practiced team to make bottles. Once bottle-making was mechanized, there was no longer a need for such teams. But bottles became cheaper, so there were still a lot of bottlemakers. But they were lower paid, because tending a bottle-making machine is not a high skill job.

The "bad case" was the stone planer. The big application for planed stone was door and window lintels for brick buildings. This had been done by lots of big guys with hammers and chisels. Steam powered stone planers replaced them. Because lintels are a minor part of buildings, this didn't cause more buildings to be built, so employment in stone planing went way down.

Those are still the three basic cases. If the market size is limited by a non-price factor, higher productivity makes wages go down.

[1] https://www.jstor.org/stable/1885817?seq=1


I think this is probably the trajectory for software development, because while people claim there is potentially unlimited demand, that demand really only materializes at rock-bottom prices.


In many cases you can saturate the market. The stone planer example is an early case. Cheaper lintels don't mean more windows, because they are a minor part of the cost. Cheaper doorknobs do not generate demand for more doorknobs, because the market size is the number of doors. Cheap potatoes, soy, corn, and cheese have saturated their markets - people can only eat so much.

This might also be true of web analytics. At some point, more data will not improve profitability.


No? You don’t only gain justification for automation by cutting costs. You can gain justification by increasing profits. You can keep the same amount of people but use them more efficiently and you create more total value. The fact you didn’t consider this worries me.

Also, the statement “show why automation hasn’t taken over” is truly, hysterically wrong. Yeah, sure, no automation has taken over since the Industrial Revolution.


You can increase profits by cutting costs. It is remarkably easier to do in the short term. And even if you choose not to downsize you can drop/stagnate wages to gain from the fact everyone else is downsizing.


None of what you just said is anything I hadn’t considered, and also none of it negates anything I said.


The Nala bot reminded me of the guys at Felipe's in Cambridge MA. When they're building burritos during dinner rush, you'd swear to god that multiple different ingredients were following a ballistic trajectory toward the tortilla at any given time. If there was a salsa radar it would show multiple inbounds like the Russkies were finally nuking us.

ETA: It didn't remind me of this because the robot is good at what it does. It reminded me of just how far away from human capabilities SOTA robotic systems are.


That’s one use case that is very hard to automate right now yes.


Thank you. Having automation means process control, which means handling sources of variation for a defined standard/spec. The claims of all jobs being done by AI end up also assuming that we will end up with factories running automated assembly lines of thought.

I have been losing my mind looking at the output of LLMs and having to nail variability down.


I recently did a contract at a medium-sized business with large retail and online operations that had a CFO and several accountants/bookkeepers. You're describing a situation where that CFO only needs two or three accountants and bookkeepers to run the business and would lay off two or three people.

It IS about headcount in a lot of cases.


Or they’d keep the same number of people and increase total value output. Businesses tend to like the idea of growth more than cost cutting after all.


People don’t suddenly eat more food due to AI. There are a lot of industries with bounded total demand.


That’s true; however, I’m truly glad 70% of the population isn’t working in food production anymore. Those were the bad old times.


> Businesses tend to like the idea of growth more than cost cutting after all.

I would offer as counter to this view: massive layoffs across the early adopters of AI, the tech giants.


However, growth is finite unless you also believe in immigration and debt.


Well all of that is false and tbh sounds a bit sus


Infinite growth is a childish belief




I keep seeing that small teams or individuals are getting most of the productivity gains from new AI.

Small teams or individuals that learn to use AI well can outpace larger teams, even if the larger teams also use AI, because communication/coordination overhead grows faster than team size. Tasks that previously needed large teams to get done can now be done by smaller teams.

Large knowledge-work teams have lost their competitive advantage.

I see this as a business opportunity for small actors. Every large knowledge-work team that doesn't quickly adapt and downsize itself is now something you can disrupt as a small team or individual.
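The coordination-overhead point is usually sketched with the pairwise-channels formula (n choose 2): headcount grows linearly while communication paths grow quadratically.

```python
# Pairwise communication channels in a team of n people: n*(n-1)/2.
# Headcount grows linearly, but coordination paths grow quadratically,
# which is the usual sketch of why large teams pay a coordination tax.

def channels(n: int) -> int:
    """Number of distinct person-to-person communication paths."""
    return n * (n - 1) // 2

for n in (3, 5, 10, 50):
    print(f"{n:>3} people -> {channels(n):>5} channels")
# 3 people share 3 channels; 50 people share 1225 -- over 400x the
# paths for under 17x the headcount.
```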


Another component or view of this is that automating the rote work is "eliminating the boring parts" (I love this and have worked extensively on this) but it is also eliminating the less cognitively demanding work.

Once you have automated extensively, all of the remaining work is cognitively demanding and doing 8 hours of that work every day is exhausting.


I frame the shift more like this:

Systems engineering is an extremely hard computer science domain with few engineers either interested in it, or good at it.

Building dashboards is tedious and requires organizational structure to deliver on. This is the bread and butter of what agents are good at building right now. You still need organization and communication skills in your company to direct the coding agents toward the dashboard you want and need. Until you hit an implementation wall, and someone needs to spend time trying to understand some of the code. At least with dashboards, you can probably just start over from scratch.

It's arguably more work to prompt in English to an AI agent to assist you with hard systems problems, and the signals the agent would need to add value aren't readily available (yet?!). Plus, there's no way systems engineers would feel comfortable taking generated code at face value. So they will definitely spend the extra mental energy to read what is output.

So I don't know. I think we're going to keep marching forward, because that's what we do, but I also don't think this "vibe-coded" automated code generator phase we're in right now will ultimately last. It'll likely fall apart and the pieces we put back together will likely return us to some new kind of normal, but we'll all still need to know how to be damn good software engineers.


I understand where you're coming from, and think there is something missing in your final paragraph that I'm curious to understand. If LLMs do end up improving productivity, what would make them go away? I think automated code generators are here until something more performant supersedes them. So, what in your mind might be possibilities of that thing?


Well I guess I no longer believe that long term, all this code generation would make us more productive. At least not how the fan favorite claude-code currently does it.

I've found some power use cases for LLMs, like "explore," but everyone seems misty-eyed that these coding agents can one-shot entire features. I suspect it'll be fine until it's not, and people will get burned by what is essentially trusting these black boxes to barf out entire implementations, leaving trails of code soup.

Worse is that junior engineers can say they're "more productive" but it's now at the expense of understanding what it is they just contributed.

So, sure, more productive, but in the same way that 2010s move fast and break things philosophy was, "more productive." This will all come back to bite us eventually.


>> The thing I keep seeing firsthand is that automation doesn't eliminate the job - it eliminates the boring part of the job, and then the job description shifts.

No, not necessarily. There are different kinds of automation.

Earlier in my career I sold and implemented enterprise automation solutions for large clients. Think document scanning, intelligent data extraction and indexing and automatic routing. The C-level buyers overwhelmingly had one goal: to reduce headcount. And that was almost always the result. Retraining redundant staff for other roles was rare. It was only done in contexts where retaining accumulated institutional knowledge was important and worth the expense.

Here's the thing though: to overcome objections from those staff, whom we had to interview to understand the processes we were automating, we told them your story: you aren't being replaced, you're being repurposed for higher-level work. Wouldn't it be nice if the computer did the boring and tedious parts of your job so that you can focus on more important things? Most of them were convinced. Some, particularly those who had been around the block, weren't.

Ultimately, technologies like AI will have the same impact. They aren't quite there yet, but I think it's just a matter of time.


> The C-level buyers overwhelmingly had one goal: to reduce headcount.

For many businesses this is the only way to significantly reduce costs.


This is exactly why I'm not that worried. I've noticed that AI is great at the parts of software engineering that I'm bad at, like implementing a new unfamiliar library, deploy pipelines, infra configuration, knowing specific technical details and standard patterns.

It's bad at the stuff I'm good at: thinking about the wider context, architecture, how to structure the code in an elegant, maintainable way, debugging complex issues, figuring out complex algorithms. I've tried using AI for those things, but it sucks at them. But I've also used it to solve configuration problems that I doubt I'd have been able to figure out on my own.


one reason why i started enjoying programming less and less was because i felt i was spending 95% of the time on the problems you described which i felt were more or less the same over the years and werent complicated but annoying. unfortunately or fortunately, after coding for over 15 years for the past 4 months ive only been prompting and reading the outputted code. it never really feels like writing something would be faster than just prompting, so now i prompt 2-3 projects at the same time and play a game on the side to fill in the time while waiting for the prompts to finish. its nice since im still judged as if its taking the time to do it manually but if this ever becomes the norm and expectations rise it would become horribly draining. mentally managing the increased speed in adding complexity is very taxing for me. i no longer have periods where i deep dive into a problem for hours or do some nice refactoring which feels like its massaging my brain. now all i do is make big decisions


This is also my experience. I am personally really happy about it. I never cared about the typing part of programming. I got into programming for the thinking about hard problems part. I now think hard more than ever. It's hard work, but it feels much more fulfilling to me.


I miss the deep dives. I make time for them again. A month or two ago, I was working on a really complex problem where I relied way too much on AI, and that reliance kept my thinking about the problem relatively shallow, which meant that while I understood the big picture of the problem, I didn't really understand the intricacies. And the AI didn't either; I must have wasted about a week just trying to get the AI to solve it.

Eventually, I switched. I stopped using the AI in my IDE, and instead used a standalone Copilot app to which I had to actually explain the problem. That forced me to understand it, and that helped me solve it. It demoted the AI to an interactive rubber duck (which is a great use for AI). That moment when I finally started to understand the real problem, that was great. That's the stuff I love about this work, and I won't let the AI take that away from me again.


I would imagine, in this example, that the fact that you put in the numbers yourself gives you a mental map of where the numbers are and how they relate to each other, that having AI do it for you doesn't give you.

You could stare at a large sheet of numbers for a long time, and perhaps never get the kind of context you gained by entering them.

Additionally, if there was a mistake, it may not be as noticeable.


> The bookkeeper is still there, still needed, but now they're doing the part that actually requires judgment.

The argument might be fundamentally sound, but now we're automating the part that requires judgement. So if the accountants aren't doing the mechanical part or the judgement part, where exactly is the role going? Formalised reading of an AI provided printout?

It seems quite reasonable to predict that humans just won't be able to make a living doing anything that involves screens or thinking, and we go back to manual labour as basically what humans do.


Even manual labor is uncertain. Nothing in principle prevents a robot from being a mass produceable, relatively cheap, 24/7 manual worker.

We've presumably all seen the progress of humanoid robotics; they're currently far from emulating human manual dexterity, but in the last few years they've gotten pretty skilled at rapid locomotion. And robots will likely end up with a different skill profile at manual tasks than humans, simply due to being made of different materials via a more modular process. It could be a similar story to the rise of the practical skills of chatbots.

In theory we could produce a utopia for humans, automating all the bad labor. But I have little optimism left in my bones.


By what logic are the "manual labor" jobs available? And if you're right and they somehow are, isn't that just another way of saying humanity is enslaving itself to the machines?


You’re not taking into account that a successful bookkeeper may have hired someone like a new grad to take the drudgery off of their hands and now they can just do it themselves.


I'd imagine that when the 80% of less productive time is automated, the market doesn't respond by demanding 80% more output. There's just 20% as much work, either making this a part-time job or, more likely, leaving a much smaller workforce as the number of man-hours demanded by the market greatly shrinks.


Scope will increase.

Good accounting teams will have more time and resources to do things like identify fraud, waste, duplicated processes, etc. They will also have time to streamline/optimize existing practices.

Good teams will earn many multiples of their cost in terms of savings or increased earnings.

There may be increased competition for the low-cost “just meet the legal compliance requirements” offerings, but any business that makes money and wants to make more will gladly spend more than the minimum for better service.


Let’s do some math.

He does 100 units of product per 100 units of time: 80 units of time on data entry, 20 units of time on “thinking.”

Now we automate the task in such a way that the ratio flips, so data entry takes 20 units of time per 100 products. Assuming the same 20 units of thinking as before, we now use 40 units of time to produce 100 units of product.

Assuming linear scaling: 80 units of time produce 200 units of product, and 100 units of time (50 on each task) produce 250.

You either work 40 and produce the same, or work the same and produce 250. NOT THE SAME.
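That arithmetic, restated as a quick sketch (using the comment's own assumption that the 80/20 data-entry/thinking split flips to 20/20 after automation):

```python
# Before automation: 100 units of product take 100 units of time
# (80 data entry + 20 thinking). After automation the split flips,
# so the same batch takes 20 + 20 = 40 units of time.

BATCH = 100             # units of product per batch
TIME_BEFORE = 80 + 20   # time per batch, pre-automation
TIME_AFTER = 20 + 20    # time per batch, post-automation

# Option A: hold output fixed, shrink hours.
hours_for_same_output = TIME_AFTER                        # 40 units of time
# Option B: hold hours fixed, grow output (linear scaling assumed).
output_for_same_hours = BATCH * TIME_BEFORE / TIME_AFTER  # 250 units

print(hours_for_same_output, output_for_same_hours)
```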


The desktop PC was the same - everyone said that it was going to wipe out jobs, when the main thing it wiped out was filing cabinets.

AI commentators seem to overlook that one of the primary functions of capitalism is to keep people in busywork: what David Graeber called Bullshit Jobs. So AI is going to automate most of the bullshit away, but the bullshit employees will keep working, because there wasn’t much need for them in the first place.


You are describing cases where small businesses have little headcount and can't shrink any further.

But in the much bigger picture, AI is akin to what Excel did to a building full of people doing accounting and bookkeeping. Except at the time there were plenty of opportunities for those people to do different things in the market. Something economists constantly harp on about.

I don't see this now. For whatever reason the economy has far more bullshit jobs than in those days; despite computers and technology we have far more administrative hurdles and employees than before, and as AI automates away that needless complexity, 70% of those jobs will go in the next 5 years. It isn't clear to me that in a world where many jobs are specialised there is enough time and room for people to relearn the skills other jobs require, or that there are enough openings to absorb the ones who were laid off.


Accountants will still exist, but we'll need fewer of them at any given time. In your example of flipping the 80/20 ratio, you are implying that each accountant would be able to (theoretically) handle a 5x workload with AI making up the gap.

Perhaps in reality more like a 3x advantage, due to human inefficiencies and the overhead of scaling the business to handle more clients.

Given that, a 3x increase in productivity implies we either need 1/3 as many accountants, or the increased supply of accountancy brings down prices and more clients start hiring accountants due to affordability.
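A quick sanity check of the "1/3 the accountants" claim, with made-up figures (the client count and per-accountant capacity are assumptions, not data):

```python
clients = 900                 # assumed fixed market demand
per_accountant_before = 30    # clients one accountant can serve today (assumption)
per_accountant_after = per_accountant_before * 3  # the 3x productivity gain

headcount_before = clients / per_accountant_before  # 30 accountants
headcount_after = clients / per_accountant_after    # 10 accountants
print(headcount_after / headcount_before)  # 1/3 of the original headcount
```

The alternative branch — prices fall and demand grows — would show up here as `clients` rising, partly offsetting the headcount drop.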


(I work in house handling the tax function.)

If AI tools worked, they would eliminate the bookkeepers. Their job is data entry and validation.

But bookkeeping is extremely important. Bad bookkeeping has killed more companies than bad accounting. Without proper books, the accounting, finance, and tax teams are just cosplaying.


That would happen if the AI were good and consistent at doing the mechanical part. Which it is, sometimes.

I've found it's better to have the bot write a program to do the mechanical part than trusting it not to have a lazy day.
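One way to read "have the bot write a program": ask the model once for a deterministic checker, then run that script on every batch instead of asking the model per row. A minimal sketch of what such a generated validator might look like — the field names and rules here are invented for illustration:

```python
import csv
import io

def validate_ledger(text):
    """Deterministic bookkeeping checks: same input, same result, every run."""
    errors = []
    # start=2 because row 1 of the file is the header
    for line_no, row in enumerate(csv.DictReader(io.StringIO(text)), start=2):
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            errors.append((line_no, "bad amount"))
            continue
        if not row.get("account"):
            errors.append((line_no, "missing account"))
        if amount == 0:
            errors.append((line_no, "zero-value entry"))
    return errors

sample = "account,amount\nsales,120.50\n,40\nfees,oops\n"
print(validate_ledger(sample))  # [(3, 'missing account'), (4, 'bad amount')]
```

The point is the variance: an LLM asked to eyeball the same rows twice may give two answers; the script it wrote once gives one.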


I am not in the same camp. As we shift job roles, lots of people will get uprooted, and it will have a negative impact on life in a general sense.

Similar to any industrial advancement in human history.


> And the people who were good at the thinking part but slow at data entry are suddenly the most valuable people in the room.

No, they aren't. They are now competing with everyone - the slow thinkers, the barely-conscious thinkers, the erratic thinkers, the "unable to reach a conclusion" thinkers as well as the people quick at "data entry", with the caveat that the people quick at "data entry" are almost certainly going to be better thinkers than those that weren't quick at data entry.

IOW, you think AI isn't coming for some specific class of programmers, but you are wrong. You and the "other types" will continue this debate in the soup kitchen.


Both jobs are going away. Prepare.


I'm not very familiar with the field on a practical basis.

What parts of the job require judgement that is resistant to automation? What percentage of customers need that?

If the hours an accountant spends on a customer go from 4 per month to 1, do you reckon they can sustainably charge the same?


Why would better efficiency mean they have to charge less?


Because your competitor will double their number of customers, and halve their prices— forcing you to do the same.


So then everyone would continue earning the same as before.


Yeah bro, it's been three years. We are just beginning. We will replace the vast majority of professional service workers in 10 years, including lawyers, as AI shifts to local and moves away from the cloud.


If we wipe out the vast majority of white collar jobs in just 10 years, we’re talking complete economic collapse.

No society can possibly absorb that kind of disruption over such a short time.

Also, even assuming AI could completely replace lawyers: lawyers control the legislature. They may not be able to stop your local model from telling you how to do something, but they can stop you from actually doing it without a lawyer.


Even subway train operators in NYC, whose job can be safely automated away, and has been for like 20 years, were able to legally mandate their jobs. I bet lawyers will, too. But the numbers of junior partners, and of paralegals, will dwindle.


But then will we not need more judges and courts?


Correct, which is why we will have the first worldwide revolution as people realize their democracies are fake, they are simply enslaved by capitalists; which is exactly what they told us Commies would do.


The chances of all of those revolutions not touching off world war 3 and decimating infrastructure and trade to the point that we can’t produce the chips to run AI is what now?


Very low. I didn't say French Revolution. Political swing away from capitalism towards socialism.


You said “the first worldwide revolution”, which implies a bit more than a swing toward socialism.

My guess is we have a low chance of peacefully transitioning to socialism if we lose 40% of jobs in under a decade.

Someone is going to take advantage of that the way Hitler did.


I'm glad we have intelligent, mature, uncorrupted politicians who will be able to work together to make sure that this doesn't cause a depression so profound that the entire economy ceases to be viable.

Oh..


Hey. I voted for the other one.


Lawyers, doctors, and accountants aren't just paid to be knowledge workers.

They're paid to accept responsibility for when they fuck up (even when it's not intentional).

Programmers aren't held responsible for their screw-ups. If they were, software wouldn't be the buggy mess it is today.


Until you get firms willing to take on the risk and remove the human element. That is a hell of a war chest for fighting actionable incidents.


That's 70% of the population living in ghettos and the economy collapsing through lack of people with disposable income with extra steps.



