Feels too self-congratulatory when he claims to be correct about self-driving in the Waymo case. The bar he set is so broad and ambiguous that probably nothing Waymo did would qualify as self-driving to him. So he thinks humans are intervening once every 1-2 miles to train the Waymo; we're not even sure that's true. I heard from friends it was 100+ miles, but let's say Waymo comes out and says it's 1,000 miles.
Then I bet Rodney can just move the goalposts and say that 3.26 trillion miles were driven in the US in 2024, so a human intervening every 1,000 miles would mean 3.26 billion interventions, and that this is clearly not self-driving. In fact, until Waymo disables the Internet on all its cars and proves it never needs any intervention ever, Rodney can claim he's right. Even then, the car not stopping exactly where Rodney wanted it to might be proof that self-driving doesn't work.
The "next big thing after deep learning" prediction is clearly false. LLMs are deep learning, scaled up; we are not in any sense looking past deep learning. Rodney, I bet, wanted it to be symbolic AI, but that is most likely a dead end, and the bitter lesson actually holds. In fact we have been riding this deep learning wave since AlexNet in 2012. OpenAI has talked about scaling since 2016, and during that time the naysayers could be very confident and claim we needed something more, but OpenAI went ahead, proved out the scaling hypothesis, and passed the language Turing test. We haven't needed anything more except scale, and reasoning has turned out to be similar: just an LLM trained to reason, no symbolic merger, not even a search step, it seems.
Waymo cars can drive. Everything from the (limited) public literature to riding them personally has me totally persuaded that they can drive.
DeepMind RL/MCTS can succeed in fairly open-ended settings like StarCraft and shit.
Brain/DeepMind still knocks hard. They under-invested in LLMs and remain kind of half-hearted around it because they think it’s a dumbass sideshow because it is a dumbass sideshow.
They train on TPUs, which cost less than chips made of rhodium like a rapper's sunglasses, and they fixed the structural limits of TF2 and PyTorch via the JAX ecosystem.
If I ever get interested in making some money again Google is the only FAANG outfit I’d look at.
I can tell you, as someone who crosses paths almost every day with a Waymo car, they absolutely do work. I would describe their driving behavior as very safe and overly cautious. I'm far more concerned about humans behind the wheel.
I think that, if it were true that Waymo cars require human intervention every 1-2 miles (thus requiring 1 operator for every, say, 1-2 cars, probably constantly paying attention while the car is in motion), then it would be fair to say that the cars are not really self driving.
However, if the real number is something like an intervention every 20 or 100 miles, and so an operator is likely passively monitoring dozens of cars, and the cars themselves ask for operator assistance rather than the operator actively monitoring them, then I would agree with you that Waymo has really achieved full self driving and his predictions on its basic viability have turned out wrong.
I have no idea though which is the case. I would be very interested if there are any reliable resources pointing one way or the other.
Waymo is the best driver I’ve ridden with. Yes it has limited coverage. Maybe humans are intervening, but unless someone can prove that humans are intervening multiple times per ride, “self driving” is here, IMO, as of 2024.
In what sense is self-driving “here” if the economics alone prove that it can’t get “here”? It’s not just limited coverage, it’s practically non-existent coverage, both nationally and globally, with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in.
It's covering significant areas of 3 major metros, and the core of one minor, with testing deployments in several other major metros. Considering the top 10 metros are >70% of the US ridehail market, that seems like a long way beyond "non-existent" coverage nationally.
You’re narrowing the market for self-driving to the ridehail market in the top 10 US metros. That’s kinda moving the goal posts, my friend, and completely ignoring the promises made by self-driving companies.
The promise has been that self-driving would replace driving in general because it’d be safer, more economical, etc. The promise has been that you’d be able to send your autonomous car from city to city without a driver present, possibly to pick up your child from school, and bring them back home.
In that sense, yes, Waymo is nonexistent. As the article author points out, lifetime miles for “self-driving” vehicles (70M) accounts for less than 1% of daily driving miles in the US (9B).
Even if we suspend that perspective, and look at the ride-hailing market, in 2018 Uber/Lyft accounted for ~1-2% of miles driven in the top 10 US metros. [1] So, Waymo is a tiny part of a tiny market in a single nation in the world.
Self-driving isn’t “here” in any meaningful sense and it won’t be in the near-term. If it were, we’d see Alphabet pouring much more of its war chest into Waymo to capture what stands to be a multi-trillion dollar market. But they’re not, so clearly they see the same risks that Brooks is highlighting.
There are, optimistically, significantly fewer than 10k Waymos operating today. There are a bit fewer than 300M registered vehicles in the US.
If the entire US automotive production were devoted solely to Waymos, it'd still take years to produce enough vehicles to drive any meaningful percentage of the daily road miles in the US.
I think that's a bit of a silly standard to set for hopefully obvious reasons.
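To put rough numbers on the fleet-size point, here's a back-of-envelope sketch. The 9B daily US miles figure comes from the article; the ~200 revenue miles per robotaxi per day is purely my assumption for illustration:

```python
# Rough fleet-scale arithmetic. 9B daily US vehicle miles is from the
# article; 200 miles/car/day is an assumed utilization figure.

def fleet_needed(daily_miles_target, miles_per_car_per_day=200):
    """Cars required to serve a given number of daily miles."""
    return daily_miles_target / miles_per_car_per_day

# Cars required to serve just 1% of US daily miles:
print(fleet_needed(0.01 * 9e9))  # 450000.0
```

Even covering 1% of daily miles would need a fleet nearly two orders of magnitude larger than today's, which is why "share of all US miles" is a strange yardstick for whether the technology works.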
> ..is a tiny part of a tiny market in a single nation in the world.
The calculator was a small device made in one tiny market in one nation in the world. Now we've all got a couple of hardware ones in our desk drawers, and a couple of software ones on each smartphone.
If a driving car can perform 'well' (Your Definition May Vary - YDMV) in NY/Chicago/etc. then it can perform equally 'well' in London, Paris, Berlin, Brussels, etc. It's just that EU has stricter rules/regulations while US is more relaxed (thus innovation happens 'there' and not 'here' in the EU).
When 'you guys' (US) nail self-driving, it will only be a matter of time until we (EU) allow it to cross the pond. I see this as a hockey-stick graph. We are still in the eraser/blade phase.
Speaking for one of those metro areas I'm familiar with: maybe in SF city limits specifically (where they still have about half of Uber's share), but that's 10% of the population of the Bay Area metro. I'm very much looking forward to the day when I can take a robocab from where I live near Google to the airport - preferably, much cheaper than today's absurd Uber rates - but today it's just not present in the lives of 95+% of Bay Area residents.
> preferably, much cheaper than today's absurd Uber rates
I just want to highlight that the only mechanism by which this eventually produces cheaper rates is by removing having to pay a human driver.
I’m not one to forestall technological progress, but there are a huge number of people already living on the margins who will lose one of their few remaining options for income as this expands. AI will inevitably create jobs, but it’s hard to see how it will—in the short term at least—do anything to help the enormous numbers of people who are going to be put out of work.
I’m not saying we should stop the inevitable forward march of technology. But at the same time it’s hard for me to “very much look forward to” the flip side of being able to take robocabs everywhere.
People living on the margins is fundamentally a social problem, and we all know how amenable those are to technical solutions.
Let's say AV development stops tomorrow though. Is continuing to grind workers down under the boot of the gig economy really a preferred solution here or just a way to avoid the difficult political discussion we need to have either way?
I'm not sure how I could have been more clear that I'm not suggesting we stop development on robotaxis or anything related to AI.
All I'm asking is that we take a moment to reflect on the people who won't be winners. Which is going to be a hell of a lot of people. And right now there is absolutely zero plan for what to do when these folks have one of the few remaining opportunities taken away from them.
As awful as the gig economy has been it's better than the "no economy" we're about to drive them to.
This is orthogonal. You're living in a society with no social safety net, one which leaves people with minimal options, and you're arguing for keeping at least those minimal options. Yes, that's better than nothing, but there are much better solutions.
The US is one of the richest countries in the world, with all that wealth going to a few people. "Give everyone else a few scraps too!" is better than having nothing, but redistributing the wealth is better.
Waymo's current operational area in the bay runs from Sunnyvale to fisherman's wharf. I don't know how many people that is, but I'm pretty comfortable calling it a big chunk of the bay.
They don't run to SFO because SF hasn't approved them for airport service.
I just opened the Waymo app and its service certainly doesn't extend to Sunnyvale. I recently had an experience where I got a Waymo to drive me to a Caltrain station so I could actually get to Sunnyvale.
The public area is SF to Daly City. The employee-only area runs down the rest of the peninsula. Both of them together are the operational area.
Waymo's app only shows the areas accessible to you. Different users can have different accessible areas, though in the Bay area it's currently just the two divisions I'm aware of.
Why would you count the employee-only area? For that category to exist, it must mean it's either unreliable for customers or too expensive because there are too many human drivers in the loop. Either way, it would not count as an area served by self-driving, IMO.
I wish! In Palo Alto the cars have been driving around for more than a decade and you still can't hail one. Lately I see them much less often than I used to, actually. I don't think occasional internal-only testing qualifies as "operational".
Where's the economic proof of impossibility? As far as I know Waymo has not published any official numbers, and any third party unit profitability analysis is going to be so sensitive to assumptions about e.g. exact depreciation schedules and utilization percentages that the error bars would inevitably be straddling both sides of the break-even line.
> with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in
That argument doesn't seem horribly compelling given the regular expansions to new areas.
Analyzing Alphabet’s capital allocation decisions gives you all the evidence necessary.
It’s safe to assume that a company’s ownership takes the decisions that they believe will maximize the value of their company. Therefore, we can look at Alphabet’s capital allocation decisions, with respect to Waymo, to see what they think about Waymo’s opportunity.
In the past five years, Alphabet has spent >$100B to buy back their stock and retained ~$100B in cash. In 2024, they issued their first dividend to investors and authorized up to $70B more in stock buybacks.
Over that same time period they’ve invested <$5B in Waymo, and committed to investing $5B more over the next few years (no timeline was given).
This tells us that Alphabet believes their money is better spent buying back their stock, paying back their investors, or sitting in the bank, when compared to investing more in Waymo.
Either they believe Waymo’s opportunity is too small (unlikely) to warrant further investment, or when adjusted for the remaining risk/uncertainty (research, technology, product, market, execution, etc) they feel the venture needs to be de-risked further before investing more.
Isn’t there a point of diminishing returns? Let’s assume they hand over $70B to Waymo today. Can Waymo even allocate that?
I view the bottlenecks as two things. Producing the vehicles and establishing new markets.
My understanding of the process with the vehicles is they acquire them then begin a lengthy process of retrofitting them. It seems the only way to improve (read: speed up) this process is to have a tightly integrated manufacturing partner. Does $70B buy that? I’m not sure.
Next, to establish new markets… you need to secure people and real estate. Money is essential but this isn’t a problem you can simply wave money at. You need to get boots on the ground, scout out locations meeting requirements, and begin the fuzzy process of hiring.
I think Alphabet will allocate money as the operation scales. If they can prove viability in a few more markets the levers to open faster production of vehicles will be pulled.
> Alphabet has to buy back their stock because of the massive amount of stock comp they award.
Wait, really? They're a publicly traded company; don't they just need to issue new stock (the opposite of buying it back) to employees, who can then choose to sell it in the public market?
That's a very hand wavy argument. How about starting here:
> Mario Herger: Waymo is using around four NVIDIA H100 GPUs at a unit price of $10,000 per vehicle to cover the necessary computing requirements. The five lidars, 29 cameras, 4 radars – adds another $40,000 - $50,000. This would put the cost of a current Waymo robotaxi at around $150,000
There are definitely some numbers out there that allow us to estimate within some standard deviations how unprofitable Waymo is
(That quote doesn't seem credible. It seems quite unlikely that Waymo would use H100s -- for one, they operate cars that predate the H100 release. And H100s sure as hell don't cost just $10k either.)
You're not even making a handwavy argument. Sure, it might sound like a lot of money, but in terms of unit profitability it could mean anything at all depending on the other parameters. What really matters is a) how long a period that investment is depreciated over; b) what utilization the car gets (or alternatively, how much revenue it generates); c) how much lower the operating costs are due to not needing to pay a driver.
Like, if the car is depreciated over 5 years, it's basically guaranteed to be unit profitable. While if it has to be depreciated over just a year, it probably isn't.
Do you know what those numbers actually are? I don't.
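To make the sensitivity concrete, here's a minimal sketch. Every figure in it (vehicle cost, fare, opex, utilization) is an assumption for illustration; none are published by Waymo:

```python
# Back-of-envelope robotaxi unit economics under purely assumed numbers.

def annual_margin(vehicle_cost, depreciation_years, revenue_per_mile,
                  miles_per_year, opex_per_mile):
    """Annual profit per vehicle: per-mile margin on miles driven,
    minus straight-line depreciation of the vehicle cost."""
    depreciation = vehicle_cost / depreciation_years
    gross = (revenue_per_mile - opex_per_mile) * miles_per_year
    return gross - depreciation

# Hypothetical: $150k vehicle, $2/mile fares, $1/mile opex, 50k miles/year.
print(annual_margin(150_000, 5, 2.0, 50_000, 1.0))  # 20000.0 (5-yr depreciation)
print(annual_margin(150_000, 1, 2.0, 50_000, 1.0))  # -100000.0 (1-yr depreciation)
```

Same hardware cost, opposite sign on profitability, just from the depreciation schedule - which is exactly why third-party break-even analyses straddle the line.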
It's here in the product/research sense, which is the hardest bar to cross. Making it cheaper takes time, but generally we have reduced the cost of everything by orders of magnitude when manufacturing ramps up, and I don't think self-driving hardware (sensors etc.) will be any different.
It’s not even here in the product/research sense. First, as the author points out, it’s better characterized as operator-assisted semi-autonomous driving in limited locations. That’s great but far from autonomous driving.
Secondly, if we throw a dart on a map: 1) what are the chances Waymo can deploy there, 2) how much money would they have to invest to deploy, and 3) how long would it take?
Waymo is nowhere near a turn-key system where they can setup in any city without investing in the infrastructure underlying Waymo’s system. See [1] which details the amount of manual work and coordination with local officials that Waymo has to do per city.
And that’s just to deploy an operator-assisted semi-autonomous vehicle in the US. EU, China, and India aren’t even on the roadmap yet. These locations will take many more billions worth of investment.
Not to mention Waymo hasn’t even addressed long-haul trucking, an industry ripe for automation that makes cold, calculated, rational business decisions based on economics. Waymo had a brief foray in the industry and then gave up. Because they haven’t solved autonomous driving yet and it’s not even on the horizon.
Whereas we can drop most humans in any of these locations and they’ll mostly figure it out within the week.
Far more than lowering the cost, there are fundamental technological problems that remain unsolved.
> So he think humans are intervening once every 1-2 miles to train the Waymo
Just to make sure we're applying our rubric fairly and universally: Has anyone else been in an Uber where you wished you were able to intervene in the driving a few times, or at least apply RLHF to the driver?
(In other words: Waymo may be imperfect to the point where corrections are sometimes warranted; that doesn't mean it isn't already driving at a superhuman level, for most humans. Just because there is no way for remote advisors to provide better decisions for human drivers doesn't mean that human-driven cars would not benefit from that, if it were available.)
To apply this benchmark, you'd have to believe that Waymo is paying operators to improve the quality of the ride, not to make the ride possible at all. That is, you'd have to believe that the fully autonomous car works and gets you to your destination safely and in a timely manner (at the level of a median professional human driver), but Waymo decided that's not good enough and hired operators to improve beyond that. This seems very unlikely to me, and some of the (few) examples I've seen online were about correcting significant failures, such as waiting behind a parked truck indefinitely (as if it were stopped at a red light) or looping around aimlessly in a parking lot.
You'd also have to believe that when you wished to change how your Uber driver drove, you'd actually have improved things rather than worsened them.
Let's suppose Waymo's fully automated stuff has tenfold-fewer fatal collisions than a human. There's no way to avoid the fatal accidents a human causes, and the solution to Waymos getting stuck sometimes is simple. The point is that the Waymo can actually be described as superior to a human driver, and the fact that its errors can be corrected with review is a feature and not a bug - they optimize for those kinds of errors rather than unrecoverable ones.
The Waymo criticisms are absurd to the point of dishonesty. He criticizes a Waymo for... not pulling out fast enough around a truck, or for human criminals vandalizing them? Oh no, once some Waymos did a weird thing where they honked for a while! And a couple times they got stuck over a few million miles! This is an amazingly lame waste of space, and the fact that he does his best to only talk about Tesla instead of Waymo emphasizes how weak his arguments are, particularly in comparison to his earliest predictions. (Obviously only the best self-driving car matters to whether self-driving cars have been created.)
"Nothing ever happens"... until it does, and it seems Brooks's prediction roundups can now conveniently be replaced by a little rock with "nothing in AI ever works" written on it, without anything of value being lost.
Interesting, that wasn't my takeaway from the article at all!
Direct quote from the article:
> Then I will weave them together to explain how it is still pretty much business as usual, and I mean that in a good way, with steady progress on both the science and engineering of AI.
There are some extremely emotional defences of Waymo on this comment thread. I don't quite understand why? Are they somehow inviolable to constructive criticism in the SV crowd?
Nonsense. If you spoke about self-driving cars a few decades ago you would have understood it to have meant that you could go to a dealer and buy a car that would drive itself, wherever you might be, without your input as a driver.
No-one would have equated the phrase "we'll have self-driving cars" with "some taxis in a few of US cities"
Your objection to him claiming a win on self driving is that you think that we can still define cars as self driving even when humans are operating them? Ok I disagree. If humans are operating them then they simply are not self driving by any sensible definition.
Human interventions are some nonzero number in current self-driving cars and will likely be that way for a while. Does this mean self-driving is a scam, that it's really just a human driving, and that these are actually ADAS? Maybe in some pedantic sense you are right, but then your definition is not useful, since it lumps cruise-control/lane-keeping ADAS and Waymos in the same category. Waymo is genuinely, qualitatively a big improvement over any ADAS/self-driving system we have seen. I suspect Rodney did not predict even Waymos to be possible, but gave himself enough leeway that he can pedantically argue Waymos are just ADAS and his prediction was right.
This is not about crashes. By all accounts, the Waymo cars are mostly fully self driving; I believe even the article author agrees with that. This includes crash avoidance, to the extent that they can.
The remote operation seems to be more about navigational issues and reading the road conditions. Things like accidentally looping, or not knowing how to proceed with an unexpected obstacle. Things that don't really happen to human drivers, even the greenest of new drivers.
You can make exactly the opposite argument as well: You think that we can still define cars as human-driven even when they have self-driving features (e.g. lane keeping). If the car is self-driving in even the smallest way, then they simply are not human-operated by any sensible definition.
Yeah, I think semi-autonomous vehicles are a huge milestone and should be celebrated, but the jump from semi-autonomous to fully autonomous will, I think, feel noticeably different. It will be a moment after which future generations have trouble imagining a world where drunk or tired driving was ever an issue.
The future is here, just unevenly distributed. There are already people that don't have that issue, thanks to technology. That technology might be Waymo and not driving in the first place, or the technology might be smartphones and the Internet, which enables Uber/Lyft to operate. Some of them might use older technologies like concrete which enables people to live more densely and not have to drive to get to the nearest liquor establishment.
> That being said, we are not on the verge of replacing and eliminating humans in either white collar jobs or blue collar jobs.
Tell that to someone laid off when replaced by some "AI" system.
> Waymo not autonomous enough
It's not clear how often Waymo cars need remote attention, but it's not every 1-2 miles. Customers would notice the vehicle being stopped and stuck during the wait for customer service. There are many videos of people driving in Waymos for hours without any sign of a situation that required remote intervention.
Tesla and Baidu do use remote drivers.
The situations where Waymo cars get stuck are now somewhat obscure cases. Yesterday, the new mayor of SF had two limos double-parked, and a Waymo got stuck behind that. A Waymo got stuck in a parade that hadn't been listed on Muni's street closure list.
> Flying cars
Probably at the 2028 Olympics in Los Angeles. They won't be cost-effective, but it will be a cool demo.
EHang recently put solid-state batteries into their flying car and got 48 minutes of flight time, instead of their previous 25 minutes. The EHang is basically a scaled-up quadrotor drone, with 16 motors and props. EHang has been flying for years, but not for very long per recharge. Better batteries will help a lot.
> Tell that to someone laid off when replaced by some "AI" system.
What are some good examples? I am very skeptical of anyone losing their jobs to AI. People are getting laid off for various reasons:
- Companies are replacing American tech jobs with foreigners
- Many companies hired more devs than they need
- Companies hired many devs during the pandemic and don't need them anymore
Some companies may claim they are replacing devs with AI. I take it with a grain of salt. I believe some devs were probably replaced by AI, but not a large amount.
I think there may be a lot more layoffs in the future, but AI will probably account for a very small fraction of those.
> I believe some devs were probably replaced by AI, but not a large amount.
I'm not even sold on the idea that there were any. The media likes to blame AI for the developer layoffs because it makes a much more exciting story than interest rates and arcane tax code changes.
But the fact is that we don't need more than the Section 174 changes and the end of ZIRP to explain what's happened in tech. Federal economic policy was set up to direct massive amounts of investment into software development. Now it's not. That's a real, quantifiable impact that can readily explain what we've seen in a way that the current productivity gains from these tools simply can't.
Now, I'll definitely accept that many companies are attributing their layoffs to AI, but that's for much the same reason that the media laps the story up: it's a far better line to feed investors than that the financial environment has changed for the worse.
But I see so many devs typing here saying how vital AI is to their writing code efficiently and quickly now. If that is true then you need way less devs. Are people just sitting idle at their desks? I do see quite a bit of tech layoffs for sure. Are you saying devs aren't part of the workers being laid off?
>In 2024: At least 95,667 workers at U.S.-based tech companies have lost their jobs so far in the year, according to a Crunchbase News tally.
> Are you saying devs aren't part of the workers being laid off?
No, they are saying that the reason for the layoffs is not AI, it is financial changes making devs too expensive.
> If that is true then you need way less devs.
This does not follow. First of all, companies take a long time to measure dev output, it's not like you can look at a burn down chart over two sprints and decide to fire half the team because it seems they're working twice as fast. So any productivity gains will show up as layoffs only after a long time.
Secondly, dev productivity is very rarely significantly bounded by how long boilerplate takes to write. Having a more efficient way to write boilerplate, even massively more efficient, say 8h down to 1h, will only marginally improve your overall throughput, at least at the senior level: all that does is free you to think more about the complex issues you needed to solve. So if the task would have previously taken you 10 days, of which one day was spent on boilerplate, it may now take you, say, 8-9 days, because you've saved one day on boilerplate, plus some more minor gains here and there. So far from firing 7 out of every 8 devs, the 8h-to-1h boilerplate solution might allow you to fire 1 dev in a team of 10.
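The arithmetic above is essentially Amdahl's law applied to dev work: speeding up only one fraction of a task caps the overall gain. A minimal sketch using the comment's own hypothetical numbers:

```python
# Amdahl's-law-style sketch: accelerating only the boilerplate portion
# of a task barely moves the total duration.

def new_duration(total_days, boilerplate_days, speedup):
    """Task duration when only the boilerplate portion is accelerated."""
    return (total_days - boilerplate_days) + boilerplate_days / speedup

# 10-day task, 1 day of boilerplate, boilerplate made 8x faster:
print(new_duration(10, 1, 8))  # 9.125 -> ~9% faster, nowhere near 8x
```

An 8x tool-level speedup on one-tenth of the work yields under a 10% overall gain, which is why "AI makes coding faster" doesn't straightforwardly translate into "fire most of the team."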
> But I see so many devs typing here saying how vital AI is to their writing code efficiently and quickly now. If that is true then you need way less devs.
Sure, in the same sense that editors and compilers mean you need way less devs.
Induced demand means we’ll need more devs than we have right now since every dev can produce more value (anyone using cursor for a longer while should be able to confirm that easily).
The problem is different in the meantime: nobody wants to be paying for training of those new devs. Juniors don’t have the experience to call LLM’s bullshit and seniors don’t get paid to teach them since LLMs replaced interns churning out boilerplate.
I was thinking about this. I think we have an overcorrection right now. People get laid off because of expected performance of AI, not real performance. With copywriting and software development we have three options:
1. leaders notice they were wrong, start to increase human headcount again
2. human work is seen as boutique and premium, used for marketing and market placement
3. we just accept the sub-par quality of AI and go with it (quite likely with copywriting I guess)
I'd like to compare it with cinema and Netflix. There was a time when lots of stuff was mindless shit, but there was still a place for A24, and it took the world by storm. What's gonna happen? No one knows.
But anyway, I figure that 90% of "laid off because of AI" is just regular layoffs with a nice-sounding reason. You don't lose anything by saying that and only gain in stakeholder trust.
If you look up business analyst type jobs on JP Morgan website they are still hiring a ton right now.
What you actually notice is how many are being outsourced to other countries outside the US.
I think the main process at work is 1% actual AI automation and a huge amount of return to the office in the US while offshoring the remote work under the cover of "AI".
I imagine there aren't really layoffs, but slowing/stopping of hiring as you get more productivity out of existing devs. I imagine in the future, lots of companies will just let their employee base slowly attrition away.
Yeah, the AgentForce thing is a classic example. Internal leaks say Salesforce is using it as cover for more regular (cost cutting based) layoffs. People who've actually evaluated AgentForce don't think it's ready for prime time. It's more smoke and mirrors (and lots of marketing).
I think what Waymo's achieved is really impressive, and I like the way they've rolled out (carefully), but there's a lot of non evidence based defense of them in this comment thread. YouTube videos of people driving for hours are textbook survivorship bias. (What about all the videos people made but didn't upload because their drive didn't go perfectly?)
Nobody knows how many times operators intervene, because Waymo hasn't said. It's literally impossible to deduce.
Which means I also agree his estimate could also be wildly wrong too.
Solid state batteries. Prototypes work, but high-volume manufacturing doesn't work yet. The major battery manufacturers are all trying to get this to production.
Early versions will probably be expensive.
Maybe a 2x improvement in kWh/kg. Much less risk of fire or thermal runaway.
Charging in < 10 mins.
The one thing I'm curious about with solid-state batteries is whether there's a path towards incremental improvements in energy density like we've seen with lithium batteries.
It would be unfortunate if we get solid-state batteries with the great features you describe but limited to 2x or so energy density. Twice the energy density opens a lot of doors for technology improvements and innovation, but it's still limiting for really cool things like humanoid robotics and large-scale battery-powered aircraft.
Somebody may come up with a new battery chemistry. There are many people trying. There are constraints other than energy density - charge rate, discharge rate, safety, lifetime, cooling, etc. Lithium-air batteries have an energy density which potentially approaches that of gasoline, but decades of work have not produced anything usable.[1]
There are, of course, small startups promising usable lithium-air batteries Real Soon Now.[2]
There are now a few large flow batteries. Here's one that's 400 megawatt-hours.[1] Round trip efficiency is poor and the installation is bulky, but storage is just tanks of liquid that are constantly recycled.
Good example of everything that can go wrong with a prediction market if left unchecked. Don't like that Waymo broke your prediction? Fine, just move your goalposts. A prediction came true but on the wrong timeframe? Just move the goalposts.
Glad Polymarket (and other related markets) exist so they can put actual goal posts in place with mechanisms that require certain outcomes in order to finalize on a prediction result.
It seems to me that the redefined flying cars for extremely wealthy people did happen? eVTOLs are being sold/delivered to the general public. Certainly still pretty rare, as I've never seen one in real life. I'd love to have one but would probably hate a world where everyone has them.
Not really wanting to have this argument a second time in a week (seriously- just look at my past comments instead of replying here as I said all I care to say https://news.ycombinator.com/item?id=42588699), but he is totally wrong about LLMs just looking up answers in their weights- they can correctly answer questions about totally fabricated new scenarios, such as solving simple physics questions that require tracking the location of objects and reporting where they will likely end up based on modeling the interactions involved. If you absolutely must reply that I am wrong at least try it yourself first in a recent model like GPT-4o and post the prompt you tried.
Kobe Bryant basically commuted by helicopter, when it was convenient. It may have even taken off and landed at his house, but probably not exactly at all of his destinations. Is a “flying car” fundamentally that much different?
I think the difference is that a helicopter is extremely technical to fly requiring complex and expensive training, and the eVTOL is supposed to be extremely simple to fly. Also the eVTOL in principle is really cheap to make if you just consider the materials and construction costs- probably eventually much cheaper than a car.
I was curious so I looked up how much you can buy the cheapest new helicopters for, and they are cheaper than an eVTOL right now- the XE composite is $68k new, and things like that can be ~25k used. I'm shocked one can in principle own a working helicopter for less than the price of a 7 year old Toyota Camry.
Nothing that flies in the air is that safe for its passengers or its surroundings - not without restrictions placed on it and having a maintenance schedule that most people would not be comfortable following.
Most components are safety critical in ways that their failure can lead to an outright crash or feeding the pilot false information leading him to make a fatal mistake. Most cars can be run relatively safely even with major mechanical issues, but something as 'simple' as a broken heater on a pitot tube (or any other component) can lead to a crash.
Then there's an issue of weather - altitude, temperature, humidity, wind speed can create an environment that makes it either impossible, unsafe, or extremely unpleasant - imagine flying into an eddy current that stalls out the aircraft, making your ass drop a few feet.
Flying's a nice hobby, and I have great respect for people who can make a career out of it, but I'd definitely not get into these auto-piloted eVTOLs, nor should people who don't know what they are doing.
Edit: Also unlike helicopters, which can autorotate, and fixed wing aircraft, that can glide, eVTOLs just drop out of the sky.
But I'm sure running costs (aviation fuel), hangar costs, maintenance costs, and the cost of maintaining a pilot license are far more expensive, compared to driving a car.
Can you imagine thousands of flying cars flying low over urban areas?
Skill level needed for "driving" would increase by a lot, noise levels would be abysmal, security implications would be severe (be they intentional or mechanical in nature), privacy implications would result in nobody wanting to have windows.
This is all more-or-less true for drones as well, but their weight is comparable to a toddler's, not a polar bear's. I firmly believe they'll never reach mass usage, but not because they're impossible to make.
I had a friend who used to (still does) fly RC helicopters; that requires quite a bit of skill. Meanwhile, I think anybody can fly a DJI drone. I think that's what will transform "flying" when anybody, not just a highly skilled pilot, can "drive" a flying car (assuming it can be as safe as a normal car... which somehow I doubt)
Yeah, as an NLP researcher I was reading the post with interest until I found that gross oversimplification about LLMs, which has been repeatedly proved wrong. Now I don't trust the comments and predictions on the other fields I know much less about.
I always have a definitional problem with predictions. Whether a specific prediction is right or wrong is moot if it doesn't help us understand the big picture and the trends.
Take, for example, the prediction about "robots can autonomously navigate all US households". Why all? From the business POV, 80% of the market is "all" in a practical sense, and most people will consider navigation around the home as "solved" if robots can do it for the majority of households with virtually no intervention. Hilarious situations will arise that amuse the folks; videos of clumsy robots will flood the internet instead of cats and dogs, but for the business side, it's lucrative enough to produce and sell them en masse.
Another question of interest is the trend. What will the approximate cost of such a robot be? How many US households will adopt such a robot by which time, as they adopted washing machines and dishwashers? Will we see linear adoption or rather logistic adoption? These are more interesting questions than just whether I'm right or wrong.
In reading this I come to wonder if the current advances in "AI" are going to follow the Self Driving Car model. Turns out the 80% is relatively easy to do, but the remaining 20% to get it right is REALLY hard.
I like Rodney Brooks, but I find the way he does these predictions to be very obtuse and subject to a lot of self-congratulatory interpretation. He highlights something green that is "NET2021" and then says he was right when something happened or didn't happen. Does something related happening in 2024 mean he predicted it right or wrong, or is everything subject to arbitrary interpretation? Where are the bold predictions? Sounds like a lot of fairly obvious predictions with a lot of wiggle room to determine if right or wrong.
NET2021 means that he predicted that the event would take place on or after 2021, so happening in 2024 satisfies that. Keep in mind these are six-year-old predictions.
Are you wishing that he had tighter confidence intervals?
If the predictions are meant to be bold, then yes. If they're meant to be fairly obvious, then no.
For example, saying that flying cars will be in widespread use NET 2025 is not much of a prediction. I think we can all say that if flying cars will be in widespread use, it will happen No Earlier Than 2025. It could happen in 2060, and that NET 2025 prediction would still be true. He could mark it green in 2026 and say he was right, that, yes, there are no flying cars, and so mark his scorecard another point in the correct column. But is that really a prediction?
A bolder prediction would be, say "Within 1-2 yrs of XX".
So what is Rodney Brooks really trying to predict and say? I'd rather read about what the necessary gating conditions are for something significant and prediction-worthy to occur, or what the intractable problems are that would make something not be possible within a predicted time, rather than reading about him complain about how much overhype and media sensation there is in the AI and robotics (and space) fields. Yes, there is, but that's not much of a prediction or statement either, as it's fairly obvious.
There's also a bit of an undercurrent of complaint in this long article about how the not-as-sexy or hyped work he has done for all those years has gone relatively unrewarded and "undeserving types" are getting all the attention (and money). And as such, many of the predictions and commentary on them read more as rant than as prediction.
Presumably you read the section where Brooks highlights all the forecasts executives were making in 2017? His NET predictions act as a sort of counter-prediction to those types of blind optimistic, overly confident assertions.
In that context, I’d say his predictions are neither obvious nor lacking boldness when we have influential people running around claiming that AGI is here today, AI agents will enter the workforce this year, and we should be prepared for AI-enabled layoffs.
The NET estimation is supposed to be a counter to the irrational exuberance of media and PR. E.g. musk says they'll get humans to Mars in 2020, and the counter is "I don't think that will happen until at least 2030".
> Systems which do require remote operations assistance to get full reliability cut into that economic advantage and have a higher burden on their ROI calculations
Technically true but I'm not convinced it matters that much. The reason autonomation took over in manufacturing was not that they could fire the operator entirely, but that one operator could man 8 machines simultaneously instead of just one.
All that verbiage about robotaxis and not a single mention about China, which by all accounts is well ahead of the US in deploying them out on the road. (With a distinctly mixed track record, it must be said, but still.)
For me these predictions show an awareness of how progress can happen based on history, but that alone will not lead to any breakthrough. I am not in the skeptic camp, so I still like the hype cycle: it creates an environment for people to push boundaries and sometimes helps untested ideas and things get explored. That might not have happened without a hype cycle. I am in the camp of people who are positive, as George Bernard Shaw was in these two quotes:
1. A life spent making mistakes is not only more honorable, but more useful than a life spent doing nothing.
2. The reasonable person adapts themselves to the world: the unreasonable one persists in trying to adapt the world to themself. Therefore all progress depends on the unreasonable person. (Changed man to person as I feel it should be gender neutral)
In hindsight, when we look back, everything looks like we anticipated it, so predictions are no different: some pan out, some don't. My feeling after reading the prediction scorecard is that you need the right balance between the risk-averse (who either are doubtful or lack faith that things will happen quickly enough) and risk-takers (who are extremely positive) for anything good to happen. Both help humanity move forward and are a necessary part of nature.
It is possible AGI might replace humans in the short term, and then new kinds of work will emerge and humans will again find something different. There is always disruption with new changes, and some survive and some don't; even if nothing much happens, it's worth trying, as said in quote 1.
>Individually owned cars can go underground onto a pallet and be whisked underground to another location in a city at more than 100mph.
I'm curious where this idea even came from, not sure who the customer would be, it's a little disappointing he doesn't mention mag-lev trains in a discussion about future rapid transit. I'd much rather ride a smooth mag-lev across town than an underground pallet system.
> The billionaire founders of both Virgin Galactic and Blue Origin had faith in the systems they had created. They both personally flew on the first operational flights of their sub-orbital launch systems. They went way beyond simply talking about how great their technology was, they believed in it, and flew in it.
> Let’s hope this tradition continues. Let’s hope the billionaire founder/CEO of SpaceX will be onboard the first crewed flight of Starship to Mars, and that it happens sooner than I expect. We can all cheer for that.
Quite an unreadable web page, somehow rationalising that there was 'everything before me' and 'everything after me' with regard to technology and prediction. An unfortunate understanding of reality, really.
If you took a transcript of a conversation with Claude 3.6 Sonnet, and sent it back in time even five years ago (just before the GPT-3 paper was published), nobody would believe it was real. They would say that it was fake, or that it was witchcraft. And whoever believed it was real would instantly acknowledge that the Turing test had been passed. This refusal to update beliefs on new evidence is very tiresome.
Similarly if you could let a person from five years ago have a spoken conversation with ChatGPT Advanced Voice mode or Gemini Live. For me five years ago, the only giveaways that the voice on the other end might not be human would have been its abilities to answer questions instantaneously about almost any subject and to speak many different languages.
The NotebookLM “podcasters” would have been equally convincing to me.
Does it drive anyone else crazy when an author posts 15,000 words (yes, there are that many in this article) when 1,500 would have more than communicated the relevant information? The length of this article is almost comical.
It's long, so I'm skimming a little and... flying cars. If you don't know why we don't have flying cars, you're not a good engineer.
It really doesn't matter what prestigious lab you ran, as that apparently didn't impart the ability to think critically about engineering problems.
[Hint: Flying takes 10x the energy of driving, and the cost/weight/volume of 1 MJ hasn't changed in close to a hundred years. Flying cars require a 10x energy breakthrough.]
The article is responding to claims by CEOs of car companies, industry and business press, and other hype sources that keep predicting flying cars next year or so. It's predicting that, against this hype, it will not come to pass. Not sure why you've worded your comment in such a way as if the article was hyping up flying cars.
Not to mention, since we do have helicopters, the engineering challenge of flying cars is almost entirely unrelated to energy costs (at least for the super rich: the equivalent of, say, a Rolls-Royce, not of a Toyota). The thing stopping flying cars from existing is that it is extremely hard to make an easy-to-pilot flying vehicle, given the numerous degrees of freedom (and potential catastrophic failure modes) and the significantly higher unpredictability and variance of the medium (air vs. road surface).
Plus, there is the major problem of noise pollution, which gets to extreme levels for somewhat fundamental reasons (you have to displace a whole lot of air to fly, which is very close to having to create sound waves).
So, overall, the energy problem is already fixed, we already have point-to-point flying vehicles usable, and occasionally used, in urban areas, helicopters. Making them safe when operated by a very lightly trained pilot, and silent enough to not wake up a neighborhood, are the real issues that will persist even if we had mini fusion reactors.
Not quite. It's about 3x. It also depends on whether you're talking fixed wing or rotary wings.
A modern car might easily have 130 kW or more, and that's what a Cessna 172 has (around 180 hp). (Sure, a plane cruises at the higher end of that, while a car only uses that much to accelerate and cruises at the lower end of the range - still not a factor of 10x.)
As another datapoint, a Diamond DA40 does around 28 miles per gallon (< 9 litres per 100 km) at 60% power cruise.
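Those datapoints can be sanity-checked with quick arithmetic. A rough sketch, using the DA40 cruise figure from the comment above, an assumed ~35 mpg highway sedan, and the standard ~33.7 kWh energy content of a gallon of gasoline:

```python
# Back-of-envelope: energy per mile in cruise, flying vs. driving.
# All inputs are rough public figures or assumptions, not measurements.

KWH_PER_GALLON = 33.7  # approx. energy content of a US gallon of gasoline

def kwh_per_mile(mpg: float) -> float:
    """Energy consumed per mile at a given fuel economy."""
    return KWH_PER_GALLON / mpg

plane = kwh_per_mile(28)  # Diamond DA40 at 60% power cruise (per the comment)
car = kwh_per_mile(35)    # typical sedan on the highway (assumed)

print(f"plane: {plane:.2f} kWh/mi, car: {car:.2f} kWh/mi, "
      f"ratio: {plane / car:.2f}x")
```

On these assumptions the cruise-energy ratio comes out well under 10x, which is the point being made; it ignores climb, hover (for VTOL), and the fact that cars drive point-to-point while planes fly straight lines.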
The article is not optimistic on flying cars. The prediction is that an expensive flying car could be purchased no earlier than 2036, and notes a strong possibility that it won’t even happen by 2050. Plus states that minor success (aka 0.1% of car sales are flying cars) isn’t going to happen in his lifetime.
The author also expands on this:
> Don’t hold your breath. They are not here. They are not coming soon.
> Nothing has changed. Billions of dollars have been spent on this fantasy of personal flying cars. It is just that, a fantasy, largely fueled by spending by billionaires.
It’s worth actually reading the article before trashing someone’s career and engineering skills!
Engineering is about focusing on what matters. There's no point in talking about flying cars: they will exist when portable fusion exists, so just talk about that.
>>> [self driving cars are remote controlled] in all cases so far deployed, humans monitoring those cars from a remote location, and occasionally sending control inputs to the cars.
Wait, What now?
I have never heard this, but from the founder of CSAIL I am going to take it as a statement of fact and proof that basically every AI company is flat out lying.
I mean the difference between remote piloting a drone that has some autonomous flying features (which they do to handle lag etc) and remote driving a car is … semantics?
But yeah it’s just moving jobs from one location to another.
Note that even the examples he gives are related to things like an operator telling the car to overtake a stopped truck instead of waiting for it to start again. So occasional high level decisions, not minute-to-minute or even second-to-second interactions like you have when flying a drone.
This is more like telling your units to go somewhere in a video game, and they mostly do it right, but occasionally you have to take a look and help them because they got stuck in a particularly narrow corridor or something.
I don't know the motivation behind making robotics and AI predictions, as these things have been done to death since the 70s, but I know people who bet for high inflation made a killing in financial futures.
> It distorts where VC money goes, always to something that promises impossibly large payoffs–it seems it is better to have an untested idea that would have an enormous payoff than a tested idea which can get to a sustainable business
But this is the whole point of VC investing. It is not normal distribution investing.
what a weird writer, lots of interesting things to talk about but this very long essay continued to circle back to being author-self-obsessed with their own prowess and drawing out huge expositions and bullet lists on how well they are at predicting things. Call it self-referential-appeal-to-authority.
Another perspective is that it is a person who takes great care/is very thorough, to examine and re-evaluate his reasonings, and makes an effort to explain the logic in his reasoning, which can be helpful if you are trying to figure out if you agree or disagree.
They train on TPU which costs less than chips made of Rhodium like a rapper’s sunglasses, they fixed the structural limits in TF2 and PyTorch via the Jax ecosystem.
If I ever get interested in making some money again Google is the only FAANG outfit I’d look at.
I can tell you as someone who crosses paths almost every day with a Waymo car, they absolutely do work. I would describe their driving behavior as very safe and overly cautious. I'm far more concerned about humans behind the wheel.
I especially love how they can go fast when it’s safe and slow when the error bars go up even a little.
It’s like being in the back seat of Niki Lauda’s car.
As shown here:
https://www.youtube.com/watch?v=hVZ8NyV4pXU
Perfect clip out of all of YouTube.
I think that, if it were true that Waymo cars require human intervention every 1-2 miles (thus requiring 1 operator for every, say, 1-2 cars, probably constantly paying attention while the car is in motion), then it would be fair to say that the cars are not really self driving.
However, if the real number is something like an intervention every 20 or 100 miles, so that an operator is likely passively monitoring dozens of cars, and the cars themselves ask for operator assistance rather than the operator actively monitoring them, then I would agree with you that Waymo has really achieved full self driving and his predictions on its basic viability have turned out wrong.
I have no idea though which is the case. I would be very interested if there are any reliable resources pointing one way or the other.
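The distinction matters because the operator-to-car ratio falls straight out of the intervention rate. A toy model (every input here is hypothetical, not a Waymo figure):

```python
# Toy model: how many cars one remote operator can cover, as a function
# of how often cars request help. All inputs are hypothetical.

def cars_per_operator(miles_between_interventions: float,
                      avg_speed_mph: float = 20.0,
                      minutes_per_intervention: float = 2.0) -> float:
    """Cars one fully occupied operator can cover."""
    hours_between = miles_between_interventions / avg_speed_mph
    busy_fraction = (minutes_per_intervention / 60.0) / hours_between
    return 1.0 / busy_fraction

print(cars_per_operator(1.5))    # intervention every 1-2 miles: ~2 cars each
print(cars_per_operator(100.0))  # every 100 miles: ~150 cars each
```

Under these assumptions, an intervention every mile or two means nearly one operator per car (barely cheaper than a driver), while an intervention every 100 miles means one operator per large fleet, which is a completely different business.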
Waymo is the best driver I’ve ridden with. Yes it has limited coverage. Maybe humans are intervening, but unless someone can prove that humans are intervening multiple times per ride, “self driving” is here, IMO, as of 2024.
In what sense is self-driving “here” if the economics alone prove that it can’t get “here”? It’s not just limited coverage, it’s practically non-existent coverage, both nationally and globally, with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in.
It's covering significant areas of 3 major metros, and the core of one minor, with testing deployments in several other major metros. Considering the top 10 metros are >70% of the US ridehail market, that seems like a long way beyond "non-existent" coverage nationally.
You’re narrowing the market for self-driving to the ridehail market in the top 10 US metros. That’s kinda moving the goal posts, my friend, and completely ignoring the promises made by self-driving companies.
The promise has been that self-driving would replace driving in general because it’d be safer, more economical, etc. The promise has been that you’d be able to send your autonomous car from city to city without a driver present, possibly to pick up your child from school, and bring them back home.
In that sense, yes, Waymo is nonexistent. As the article author points out, lifetime miles for “self-driving” vehicles (70M) account for less than 1% of daily driving miles in the US (9B).
Even if we suspend that perspective, and look at the ride-hailing market, in 2018 Uber/Lyft accounted for ~1-2% of miles driven in the top 10 US metros. [1] So, Waymo is a tiny part of a tiny market in a single nation in the world.
Self-driving isn’t “here” in any meaningful sense and it won’t be in the near-term. If it were, we’d see Alphabet pouring much more of its war chest into Waymo to capture what stands to be a multi-trillion dollar market. But they’re not, so clearly they see the same risks that Brooks is highlighting.
[1]: https://drive.google.com/file/d/1FIUskVkj9lsAnWJQ6kLhAhNoVLj...
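The scale gap in that comparison is easy to make concrete (figures as quoted above; both are approximations):

```python
# Waymo lifetime autonomous miles vs. one day of US driving (rough figures).
lifetime_autonomous_miles = 70e6  # ~70M lifetime, per the article
us_daily_miles = 9e9              # ~9B miles driven per day in the US

share = lifetime_autonomous_miles / us_daily_miles
print(f"{share:.2%} of a single day's US driving")  # well under 1%
```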
There are, optimistically, significantly less than 10k Waymos operating today. There are a bit less than 300M registered vehicles in the US. If the entire US automotive production were devoted solely to Waymos, it'd still take years to produce enough vehicles to drive any meaningful percentage of the daily road miles in the US.
I think that's a bit of a silly standard to set for hopefully obvious reasons.
> ..is a tiny part of a tiny market in a single nation in the world.
The calculator was a small device made in one tiny market in one nation of the world. Now we've all got a couple of hardware ones in our desk drawers, and a couple of software ones on each smartphone.
If a driving car can perform 'well' (Your Definition May Vary - YDMV) in NY/Chicago/etc. then it can perform equally 'well' in London, Paris, Berlin, Brussels, etc. It's just that EU has stricter rules/regulations while US is more relaxed (thus innovation happens 'there' and not 'here' in the EU).
When 'you guys' (US) nail self-driving, it will only be a matter of time til we (EU) allow it to cross the pond. I see this as a hockey-stick graph. We are still on the eraser/blade phase.
Speaking for one of those metro areas I'm familiar with: maybe in SF city limits specifically (where they still are half the Uber's share), but that's 10% of the population of the Bay Area metro. I'm very much looking forward to the day when I can take a robo cab from where I live near Google to the airport - preferably, much cheaper than today's absurd Uber rates - but today it's just not present in the lives of about 95+% of Bay Area residents.
> preferably, much cheaper than today's absurd Uber rates
I just want to highlight that the only mechanism by which this eventually produces cheaper rates is by removing having to pay a human driver.
I’m not one to forestall technological progress, but there are a huge number of people already living on the margins who will lose one of their few remaining options for income as this expands. AI will inevitably create jobs, but it’s hard to see how it will—in the short term at least—do anything to help the enormous numbers of people who are going to be put out of work.
I’m not saying we should stop the inevitable forward march of technology. But at the same time it’s hard for me to “very much look forward to” the flip side of being able to take robocabs everywhere.
People living on the margins is fundamentally a social problem, and we all know how amenable those are to technical solutions.
Let's say AV development stops tomorrow though. Is continuing to grind workers down under the boot of the gig economy really a preferred solution here or just a way to avoid the difficult political discussion we need to have either way?
I'm not sure how I could have been more clear that I'm not suggesting we stop development on robotaxis or anything related to AI.
All I'm asking is that we take a moment to reflect on the people who won't be winners. Which is going to be a hell of a lot of people. And right now there is absolutely zero plan for what to do when these folks have one of the few remaining opportunities taken away from them.
As awful as the gig economy has been it's better than the "no economy" we're about to drive them to.
This is orthogonal. You're living in a society with no social safety net, one which leaves people with minimal options, and you're arguing for keeping at least those minimal options. Yes, that's better than nothing, but there are much better solutions.
The US is one of the richest countries in the world, with all that wealth going to a few people. "Give everyone else a few scraps too!" is better than having nothing, but redistributing the wealth is better.
Do you ever drive yourself or would you feel guilty not paying a driver?
> preferably, much cheaper than today's absurd Uber rates
You haven’t paid attention to how VC companies work.
Waymo's current operational area in the bay runs from Sunnyvale to fisherman's wharf. I don't know how many people that is, but I'm pretty comfortable calling it a big chunk of the bay.
They don't run to SFO because SF hasn't approved them for airport service.
I just opened the Waymo app and its service certainly doesn't extend to Sunnyvale. I just recently had an experience where I got a Waymo to drive me to a Caltrain station so I can actually get to Sunnyvale.
The public area is SF to Daly City. The employee-only area runs down the rest of the peninsula. Both of them together are the operational area.
Waymo's app only shows the areas accessible to you. Different users can have different accessible areas, though in the Bay area it's currently just the two divisions I'm aware of.
Why would you consider the employee-only area? For that categorization to exist, it must mean it's either unreliable for customers or too expensive because there are too many human drivers in the loop. Either way, it would not be considered an area served by self driving, imo.
I wish! In Palo Alto the cars have been driving around for more than a decade and you still can't hail one. Lately I see them much less often than I used to, actually. I don't think occasional internal-only testing qualifies as "operational".
Where's the economic proof of impossibility? As far as I know Waymo has not published any official numbers, and any third party unit profitability analysis is going to be so sensitive to assumptions about e.g. exact depreciation schedules and utilization percentages that the error bars would inevitably be straddling both sides of the break-even line.
> with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in
That argument doesn't seem horribly compelling given the regular expansions to new areas.
Analyzing Alphabet’s capital allocation decisions gives you all the evidence necessary.
It’s safe to assume that a company’s ownership takes the decisions that they believe will maximize the value of their company. Therefore, we can look at Alphabet’s capital allocation decisions, with respect to Waymo, to see what they think about Waymo’s opportunity.
In the past five years, Alphabet has spent >$100B to buyback their stock; retained ~100B in cash. In 2024, they issued their first dividend to investors and authorized up to $70B more in stock buybacks.
Over that same time period they’ve invested <$5B in Waymo, and committed to investing $5B more over the next few years (no timeline was given).
This tells us that Alphabet believes their money is better spent buying back their stock, paying back their investors, or sitting in the bank, when compared to investing more in Waymo.
Either they believe Waymo’s opportunity is too small (unlikely) to warrant further investment, or when adjusted for the remaining risk/uncertainty (research, technology, product, market, execution, etc) they feel the venture needs to be de-risked further before investing more.
Isn’t there a point of diminishing returns? Let’s assume they hand over $70B to Waymo today. Can Waymo even allocate that?
I view the bottlenecks as two things. Producing the vehicles and establishing new markets.
My understanding of the process with the vehicles is they acquire them then begin a lengthy process of retrofitting them. It seems the only way to improve (read: speed up) this process is to have a tightly integrated manufacturing partner. Does $70B buy that? I’m not sure.
Next, to establish new markets… you need to secure people and real estate. Money is essential but this isn’t a problem you can simply wave money at. You need to get boots on the ground, scout out locations meeting requirements, and begin the fuzzy process of hiring.
I think Alphabet will allocate money as the operation scales. If they can prove viability in a few more markets the levers to open faster production of vehicles will be pulled.
To be clear, buying back stock is one of the ways they can invest in Waymo (and other business units).
Since Alphabet buybacks mostly just offset employee stock compensation, the main thing they are getting for this money is employees.
>believes their money is better spent buying back their stock,
Alphabet has to buy back their stock because of the massive amount of stock comp they award.
> Alphabet has to buy back their stock because of the massive amount of stock comp they award.
Wait, really? They're a publicly traded company; don't they just need to issue new stock (the opposite of buying it back) to employees, who can then choose to sell it in the public market?
It's much better comp if the value of the stock goes up.
That's a very hand wavy argument. How about starting here:
> Mario Herger: Waymo is using around four NVIDIA H100 GPUs at a unit price of $10,000 per vehicle to cover the necessary computing requirements. The five lidars, 29 cameras, 4 radars – adds another $40,000 - $50,000. This would put the cost of a current Waymo robotaxi at around $150,000
There are definitely some numbers out there that allow us to estimate, within some standard deviations, how unprofitable Waymo is.
(That quote doesn't seem credible. It seems quite unlikely that Waymo would use H100s -- for one, they operate cars that predate the H100 release. And H100s sure as hell don't cost just $10k either.)
You're not even making a handwavy argument. Sure, it might sound like a lot of money, but in terms of unit profitability it could mean anything at all depending on the other parameters. What really matters is a) how long a period that investment is depreciated over; b) what utilization the car gets (or alternatively, how much revenue it generates); c) how much lower the operating costs are due to not needing to pay a driver.
Like, if the car is depreciated over 5 years, it's basically guaranteed to be unit profitable. While if it has to be depreciated over just a year, it probably isn't.
Do you know what those numbers actually are? I don't.
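The parent's point can be made concrete with a toy model. Every number below is a made-up assumption for illustration, not a known Waymo figure; the only input taken from upthread is the (disputed) $150k vehicle cost:

```python
# Toy unit-economics model. Every input is an assumption for illustration;
# none of these are known Waymo figures.

def annual_profit(vehicle_cost, depreciation_years,
                  annual_revenue, annual_operating_cost):
    """Profit per car per year with straight-line depreciation."""
    depreciation = vehicle_cost / depreciation_years
    return annual_revenue - annual_operating_cost - depreciation

VEHICLE_COST = 150_000   # the (disputed) figure quoted upthread
REVENUE = 110_000        # guess: ~100 paid miles/day at ~$3/mile
OPEX = 60_000            # guess: charging, maintenance, remote ops

print(annual_profit(VEHICLE_COST, 5, REVENUE, OPEX))  # 20000.0: profitable
print(annual_profit(VEHICLE_COST, 1, REVENUE, OPEX))  # -100000.0: not
```

Same hardware cost, opposite conclusions, purely from the depreciation horizon; which is exactly why the $150k number alone tells you nothing about unit profitability.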
It is here in the product/research sense, which is the hardest bar to cross. Making it cheaper takes time, but we have generally reduced the cost of everything by orders of magnitude as manufacturing ramps up, and I don't think self-driving hardware (sensors etc.) will be any different.
It’s not even here in the product/research sense. First, as the author points out, it’s better characterized as operator-assisted semi-autonomous driving in limited locations. That’s great but far from autonomous driving.
Secondly, if we throw a dart on a map: 1) what are the chances Waymo can deploy there, 2) how much money would they have to invest to deploy, and 3) how long would it take?
Waymo is nowhere near a turn-key system where they can set up in any city without investing in the infrastructure underlying Waymo's system. See [1], which details the amount of manual work and coordination with local officials that Waymo has to do per city.
And that’s just to deploy an operator-assisted semi-autonomous vehicle in the US. EU, China, and India aren’t even on the roadmap yet. These locations will take many more billions worth of investment.
Not to mention Waymo hasn’t even addressed long-haul trucking, an industry ripe for automation that makes cold, calculated, rational business decisions based on economics. Waymo had a brief foray in the industry and then gave up. Because they haven’t solved autonomous driving yet and it’s not even on the horizon.
Whereas we can drop most humans in any of these locations and they’ll mostly figure it out within the week.
Far more than lowering the cost, there are fundamental technological problems that remain unsolved.
[1]: https://waymo.com/blog/2020/09/the-waymo-driver-handbook-map...
Does Waymo operate in heavy rain and any kind of snow or ice conditions?
> So he think humans are intervening once every 1-2 miles to train the Waymo
Just to make sure we're applying our rubric fairly and universally: Has anyone else been in an Uber where you wished you were able to intervene in the driving a few times, or at least apply RLHF to the driver?
(In other words: Waymo may be imperfect to the point where corrections are sometimes warranted; that doesn't mean they're not already driving at a superhuman level, for most humans. Just because there is no way for remote advisors to provide better decisions for human drivers doesn't mean that human-driven cars wouldn't benefit from such corrections, if they were available.)
To apply this benchmark, you'd have to believe that Waymo is paying operators to improve the quality of the ride, not to make the ride possible at all. That is, you'd have to believe that the fully autonomous car works and gets you to your destination safely and in a timely manner (at the level of a median professional human driver), but Waymo decided that's not good enough and hired operators to improve beyond that. This seems very unlikely to me, and some of the (few) examples I've seen online were about correcting significant failures, such as waiting behind a parked truck indefinitely (as if it were stopped at a red light) or looping around aimlessly in a parking lot.
You'd also have to believe that when you wished to change how your Uber driver drove, you'd actually have improved things rather than worsened them.
Let's suppose Waymo's fully automated stuff has tenfold-fewer fatal collisions than a human. There's no way to avoid the fatal accidents a human causes, and the solution to Waymos getting stuck sometimes is simple. The point is that the Waymo can actually be described as superior to a human driver, and the fact that its errors can be corrected with review is a feature and not a bug - they optimize for those kinds of errors rather than unrecoverable ones.
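A quick expected-value sketch of that tradeoff. The human fatality rate is the rough US ballpark (on the order of 1.3 per 100M vehicle miles); the tenfold factor is the parent's hypothetical, and the stuck-event rate is an invented number:

```python
# Fatal errors are unrecoverable; stuck-vehicle errors can be fixed by a
# remote operator. Human fatality rate is the rough US ballpark; the 10x
# factor and the stuck rate are hypothetical numbers for illustration.

MILES = 50_000_000
HUMAN_FATAL_PER_MILE = 1.3 / 100_000_000
WAYMO_FATAL_PER_MILE = HUMAN_FATAL_PER_MILE / 10  # the tenfold claim
STUCK_PER_MILE = 1 / 20_000                       # assumed remote assists

human_fatalities = MILES * HUMAN_FATAL_PER_MILE   # ~0.65, none recoverable
waymo_fatalities = MILES * WAYMO_FATAL_PER_MILE   # ~0.065
remote_assists = MILES * STUCK_PER_MILE           # ~2,500, all recoverable
```

Under these made-up numbers, the fleet trades a fraction of an unrecoverable death for thousands of recoverable pauses, which is the sense in which review-correctable errors are a feature.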
The Waymo criticisms are absurd to the point of dishonesty. He criticizes a Waymo for... not pulling out fast enough around a truck, or for human criminals vandalizing them? Oh no, once some Waymos did a weird thing where they honked for a while! And a couple times they got stuck over a few million miles! This is an amazingly lame waste of space, and the fact that he does his best to only talk about Tesla instead of Waymo emphasizes how weak his arguments are, particularly in comparison to his earliest predictions. (Obviously only the best self-driving car matters to whether self-driving cars have been created.)
"Nothing ever happens"... until it does, and it seems Brooks's prediction roundups can now be conveniently replaced with a little rock with "nothing in AI ever works" written on it, without anything of value being lost.
Interesting, that wasn't my takeaway from the article at all!
Direct quote from the article:
> Then I will weave them together to explain how it is still pretty much business as usual, and I mean that in a good way, with steady progress on both the science and engineering of AI.
There are some extremely emotional defences of Waymo on this comment thread. I don't quite understand why? Are they somehow inviolable to constructive criticism in the SV crowd?
Nonsense. If you spoke about self-driving cars a few decades ago, you would have understood the phrase to mean that you could go to a dealer and buy a car that would drive itself, wherever you might be, without your input as a driver.
No one would have equated the phrase "we'll have self-driving cars" with "some taxis in a few US cities".
Your objection to him claiming a win on self driving is that you think we can still define cars as self driving even when humans are operating them? OK, I disagree. If humans are operating them then they simply are not self driving by any sensible definition.
Human interventions are some non-zero number in current self-driving cars and will likely stay that way for a while. Does this mean self driving is a scam, that it is in fact just a human driving, and that these are actually ADAS? Maybe in some pedantic sense you are right, but then your definition is not useful, since it lumps cruise-control/lane-keeping ADAS and Waymos in the same category. Waymo is genuinely, qualitatively a big improvement over any ADAS/self-driving system we have seen. I suspect Rodney did not predict even Waymos to be possible, but gave himself enough leeway that he can pedantically argue that Waymos are just ADAS and that his prediction was right.
No one said scam (although in the case of Tesla it absolutely is). It's just not a solved problem yet.
> It's just not a solved problem yet.
Human driving isn't a solved problem either; the difference is that when a human driver needs an intervention, there's no one to intervene, so the car just crashes.
This is not about crashes. By all accounts, the Waymo cars are mostly fully self driving; I believe even the article author agrees with that. This includes crash avoidance, to the extent that they can.
The remote operation seems to be more about navigational issues and reading the road conditions. Things like accidentally looping, or not knowing how to proceed with an unexpected obstacle. Things that don't really happen to human drivers, even the greenest of new drivers.
You can make exactly the opposite argument as well: You think that we can still define cars as human-driven even when they have self-driving features (e.g. lane keeping). If the car is self-driving in even the smallest way, then they simply are not human-operated by any sensible definition.
Yeah, I think semi-autonomous vehicles are a huge milestone and should be celebrated, but the jump from semi-autonomous to fully autonomous will, I think, feel noticeably different. It will be a moment after which future generations will have trouble imagining a world where drunk or tired driving was ever an issue.
The future is here, just unevenly distributed. There are already people that don't have that issue, thanks to technology. That technology might be Waymo and not driving in the first place, or the technology might be smartphones and the Internet, which enables Uber/Lyft to operate. Some of them might use older technologies like concrete which enables people to live more densely and not have to drive to get to the nearest liquor establishment.
> That being said, we are not on the verge of replacing and eliminating humans in either white collar jobs or blue collar jobs.
Tell that to someone laid off when replaced by some "AI" system.
> Waymo not autonomous enough
It's not clear how often Waymo cars need remote attention, but it's not every 1-2 miles. Customers would notice the vehicle being stopped and stuck during the wait for customer service. There are many videos of people driving in Waymos for hours without any sign of a situation that required remote intervention.
Tesla and Baidu do use remote drivers.
The situations where Waymo cars get stuck are now somewhat obscure cases. Yesterday, the new mayor of SF had two limos double-parked, and a Waymo got stuck behind that. A Waymo got stuck in a parade that hadn't been listed on Muni's street closure list.
> Flying cars
Probably at the 2028 Olympics in Los Angeles. They won't be cost-effective, but it will be a cool demo. EHang recently put solid-state batteries into their flying car and got 48 minutes of flight time, instead of their previous 25 minutes. The EHang is basically a scaled-up quadrotor drone, with 16 motors and props. EHang has been flying for years, but not for very long per recharge. Better batteries will help a lot.
[1] https://aerospaceamerica.aiaa.org/electric-air-taxi-flights-...
> Tell that to someone laid off when replaced by some "AI" system.

What are some good examples? I am very skeptical of anyone losing their jobs to AI. People are getting laid off for various reasons:

- Companies are replacing American tech jobs with foreigners
- Many companies hired more devs than they need
- Companies hired many devs during the pandemic and don't need them anymore
Some companies may claim they are replacing devs with AI. I take it with a grain of salt. I believe some devs were probably replaced by AI, but not a large amount.
I think there may be a lot more layoffs in the future, but AI will probably account for a very small fraction of those.
> I believe some devs were probably replaced by AI, but not a large amount.
I'm not even sold on the idea that there were any. The media likes to blame AI for the developer layoffs because it makes a much more exciting story than interest rates and arcane tax code changes.
But the fact is that we don't need more than the Section 174 changes and the end of ZIRP to explain what's happened in tech. Federal economic policy was set up to direct massive amounts of investment into software development. Now it's not. That's a real, quantifiable impact that can readily explain what we've seen in a way that the current productivity gains from these tools simply can't.
Now, I'll definitely accept that many companies are attributing their layoffs to AI, but that's for much the same reason that the media laps the story up: it's a far better line to feed investors than that the financial environment has changed for the worse.
But I see so many devs typing here saying how vital AI is to their writing code efficiently and quickly now. If that is true then you need way less devs. Are people just sitting idle at their desks? I do see quite a bit of tech layoffs for sure. Are you saying devs aren't part of the workers being laid off?
>In 2024: At least 95,667 workers at U.S.-based tech companies have lost their jobs so far in the year, according to a Crunchbase News tally.
> Are you saying devs aren't part of the workers being laid off?
No, they are saying that the reason for the layoffs is not AI, it is financial changes making devs too expensive.
> If that is true then you need way less devs.
This does not follow. First of all, companies take a long time to measure dev output, it's not like you can look at a burn down chart over two sprints and decide to fire half the team because it seems they're working twice as fast. So any productivity gains will show up as layoffs only after a long time.
Secondly, dev productivity is very rarely significantly bounded by how long boilerplate takes to write. Having a more efficient way to write boilerplate, even massively more efficient, say 8h down to 1h, will only marginally improve your overall throughput, at least at the senior level: all that does is free you to think more about the complex issues you needed to solve. So if the task would have previously taken you 10 days, of which one day was spent on boilerplate, it may now take you, say, 8-9 days, because you've saved one day on boilerplate, plus some more minor gains here and there. So far from firing 7 out of every 8 devs, the 8h-to-1h boilerplate solution might allow you to fire 1 dev in a team of 10.
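The parent's arithmetic is just Amdahl's law applied to dev work; a short sketch using the 10-day/1-day split from the comment above makes the point:

```python
# Amdahl's law for dev throughput: speeding up only the boilerplate
# fraction of a task barely moves the total.

def days_with_speedup(total_days, boilerplate_days, speedup):
    """Task duration if only the boilerplate portion is sped up."""
    return (total_days - boilerplate_days) + boilerplate_days / speedup

# 10-day task, 1 day of boilerplate, boilerplate made 8x faster (8h -> 1h):
print(days_with_speedup(10, 1, 8))  # 9.125 days, nowhere near 10/8 days
```

An 8x speedup on the boilerplate buys well under a 10% gain on the whole task, which is why "AI makes me faster" does not translate into firing most of the team.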
> But I see so many devs typing here saying how vital AI is to their writing code efficiently and quickly now. If that is true then you need way less devs.
Sure, in the same sense that editors and compilers mean you need way less devs.
Induced demand means we'll need more devs than we have right now, since every dev can produce more value (anyone using Cursor for a while should be able to confirm that easily).
The problem is different in the meantime: nobody wants to be paying for training of those new devs. Juniors don’t have the experience to call LLM’s bullshit and seniors don’t get paid to teach them since LLMs replaced interns churning out boilerplate.
A lot of devs are hacks. If an AI can do your job you had no value as a software developer.
It's pretty much impossible to get work as a copywriter now
I was thinking about this. I think we have an overcorrection right now. People get laid off because of expected performance of AI, not real performance. With copywriting and software development we have three options:
1. Leaders notice they were wrong and start to increase human headcount again.
2. Human work is seen as boutique and premium, used for marketing and market placement.
3. We just accept the sub-par quality of AI and go with it (quite likely with copywriting, I guess).
I'd like to compare it with cinema and Netflix. There was a time when lots of stuff was mindless shit, but there was still a place for A24, and it took the world by storm. What's gonna happen? No one knows.
But anyway, I figure that 90% of "laid off because of AI" is just regular lay-offs with a nice-sounding reason. You don't lose anything by saying that and only gain in stakeholder trust.
90% might even be too low.
If you look up business analyst type jobs on JP Morgan website they are still hiring a ton right now.
What you actually notice is how many are being outsourced to other countries outside the US.
I think the main process at work is 1% actual AI automation and a huge amount of return to the office in the US while offshoring the remote work under the cover of "AI".
I imagine there aren't really layoffs, but slowing/stopping of hiring as you get more productivity out of existing devs. I imagine in the future, lots of companies will just let their employee base slowly attrition away.
Yeah, the AgentForce thing is a classic example. Internal leaks say Salesforce is using it as cover for more regular (cost cutting based) layoffs. People who've actually evaluated AgentForce don't think it's ready for prime time. It's more smoke and mirrors (and lots of marketing).
I think what Waymo's achieved is really impressive, and I like the way they've rolled out (carefully), but there's a lot of non evidence based defense of them in this comment thread. YouTube videos of people driving for hours are textbook survivorship bias. (What about all the videos people made but didn't upload because their drive didn't go perfectly?)
Nobody knows how many times operators intervene, because Waymo hasn't said. It's literally impossible to deduce.
Which means I also agree his estimate could also be wildly wrong too.
What is the silver bullet for battery tech?
Solid state batteries. Prototypes work, but high-volume manufacturing doesn't work yet. The major battery manufacturers are all trying to get this to production. Early versions will probably be expensive.
Maybe a 2x improvement in kWh/kg. Much less risk of fire or thermal runaway. Charging in under 10 minutes.
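One concrete implication of the sub-10-minute charging claim; the 100 kWh pack size is an assumed figure, since solid-state EV pack sizes aren't settled:

```python
# Charger power implied by "<10 minute" charging. The 100 kWh pack is an
# assumption for illustration.

pack_kwh = 100.0
charge_minutes = 10.0
required_kw = pack_kwh * 60.0 / charge_minutes

print(required_kw)  # 600.0 kW per stall
```

That is well above most of today's DC fast chargers, so the "<10 minutes" promise is as much a grid and charger-hardware problem as a cell-chemistry one.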
The one thing I'm curious about with solid state batteries is whether there's a path toward incremental improvements in energy density like we've seen with lithium batteries.
It would be unfortunate if we get solid state batteries that have the great features you describe but are limited to 2x or so the energy density. Twice the energy density opens a lot of doors for technology improvements and innovation, but it's still limiting for really cool things like humanoid robotics and large-scale battery-powered aircraft.
Somebody may come up with a new battery chemistry. There are many people trying. There are constraints other than energy density - charge rate, discharge rate, safety, lifetime, cooling, etc. Lithium-air batteries have an energy density which potentially approaches that of gasoline, but decades of work have not produced anything usable.[1]
There are, of course, small startups promising usable lithium-air batteries Real Soon Now.[2]
[1] https://en.wikipedia.org/wiki/Lithium%E2%80%93air_battery
[2] https://airenergyllc.com/
I think there are ~3 major battery improvements to watch out for.
1. Solid state batteries. Likely to be expensive, but promise better energy density.
2. Some really good grid storage battery. Likely made with iron or molten salt or something like that. Dirt cheap, but horrible energy density.
3. Continued Lithium ion battery improvements, e.g. cheaper, more durable etc.
There are now a few large flow batteries. Here's one that's 400 megawatt-hours.[1] Round trip efficiency is poor and the installation is bulky, but storage is just tanks of liquid that are constantly recycled.
[1] https://newatlas.com/energy/worlds-largest-flow-battery-grid...
My money is on saltwater batteries. You can make them really cheaply. Flow batteries are still too complicated IMO.
Good example of everything that can go wrong with a prediction market if left unchecked. Don't like that Waymo broke your prediction? Fine, just move your goalposts. Like that the prediction came true but on the wrong timeframe? Just move the goalposts.
Glad Polymarket (and other related markets) exist so they can put actual goal posts in place with mechanisms that require certain outcomes in order to finalize on a prediction result.
Well said; it shows that even the most accomplished humans have the same biases as the rest of us when not held accountable.
It seems to me that the redefined flying cars for extremely wealthy people did happen? eVTOLs are being sold/delivered to the general public. Certainly still pretty rare, as I've never seen one in real life. I'd love to have one but would probably hate a world where everyone has them.
Not really wanting to have this argument a second time in a week (seriously, just look at my past comments instead of replying here, as I said all I care to say: https://news.ycombinator.com/item?id=42588699), but he is totally wrong about LLMs just looking up answers in their weights: they can correctly answer questions about totally fabricated new scenarios, such as solving simple physics questions that require tracking the location of objects and reporting where they will likely end up based on modeling the interactions involved. If you absolutely must reply that I am wrong, at least try it yourself first in a recent model like GPT-4o and post the prompt you tried.
Kobe Bryant basically commuted by helicopter, when it was convenient. It may have even taken off and landed at his house, but probably not exactly at all of his destinations. Is a “flying car” fundamentally that much different?
I think the difference is that a helicopter is extremely technical to fly requiring complex and expensive training, and the eVTOL is supposed to be extremely simple to fly. Also the eVTOL in principle is really cheap to make if you just consider the materials and construction costs- probably eventually much cheaper than a car.
I was curious so I looked up how much you can buy the cheapest new helicopters for, and they are cheaper than an eVTOL right now- the XE composite is $68k new, and things like that can be ~25k used. I'm shocked one can in principle own a working helicopter for less than the price of a 7 year old Toyota Camry.
Nothing that flies in the air is that safe for its passengers or its surroundings - not without restrictions placed on it and having a maintenance schedule that most people would not be comfortable following.
Most components are safety critical in ways that their failure can lead to an outright crash or feeding the pilot false information leading him to make a fatal mistake. Most cars can be run relatively safely even with major mechanical issues, but something as 'simple' as a broken heater on a pitot tube (or any other component) can lead to a crash.
Then there's the issue of weather: altitude, temperature, humidity, and wind speed can create an environment that makes flying either impossible, unsafe, or extremely unpleasant. Imagine flying into an eddy that stalls out the aircraft, making your ass drop a few feet.
Flying's a nice hobby, and I have great respect for people who can make a career out of it, but I'd definitely not get into these auto-piloted eVTOLs, nor should people who don't know what they are doing.
Edit: Also unlike helicopters, which can autorotate, and fixed wing aircraft, that can glide, eVTOLs just drop out of the sky.
But I'm sure the running costs (aviation fuel), hangar costs, maintenance costs, and the cost of maintaining a pilot's license are far more expensive than driving a car.
Can you imagine thousands of flying cars flying low over urban areas?
Skill level needed for "driving" would increase by a lot, noise levels would be abysmal, security implications would be severe (be they intentional or mechanical in nature), privacy implications would result in nobody wanting to have windows.
This is all more-or-less true for drones as well, but their weight is comparable to a toddler, not to a polar bear. I firmly believe they'll never reach mass usage, but not because they're impossible to make.
I had a friend who used to (still does) fly RC helicopters; that requires quite a bit of skill. Meanwhile, I think anybody can fly a DJI drone. I think that's what will transform "flying" when anybody, not just a highly skilled pilot, can "drive" a flying car (assuming it can be as safe as a normal car... which somehow I doubt)
Yeah, as an NLP researcher I was reading the post with interest until I found that gross oversimplification about LLMs, which has been repeatedly proved wrong. Now I don't trust the comments and predictions on the other fields I know much less about.
I always have a definitional problem with predictions. Whether a specific prediction is right or wrong is moot if it doesn't help us understand the big picture and the trends.
Take, for example, the prediction about "robots can autonomously navigate all US households". Why all? From the business POV, 80% of the market is "all" in a practical sense, and most people will consider navigation around the home "solved" if robots can do it for the majority of households with virtually no intervention. Hilarious situations will arise; videos of clumsy robots will flood the internet instead of cats and dogs, but for the business side it's lucrative enough to produce and sell them en masse. Another question of interest is the trend: What will the approximate cost of such a robot be? How many US households will have adopted one by which time, as they adopted washing machines and dishwashers? Will we see linear adoption or rather logistic adoption? These are more interesting questions than just whether I'm right or wrong.
In reading this I come to wonder if the current advances in "AI" are going to follow the Self Driving Car model. Turns out the 80% is relatively easy to do, but the remaining 20% to get it right is REALLY hard.
I like Rodney Brooks, but I find the way he does these predictions very obtuse and subject to a lot of self-congratulatory interpretation. When he highlights something green that is "NET 2021" and then says he was right because something related happened in 2024, did he really predict it, or is everything subject to arbitrary interpretation? Where are the bold predictions? This sounds like a lot of fairly obvious predictions with a lot of wiggle room in determining whether they were right or wrong.
NET2021 means that he predicted that the event would take place on or after 2021, so happening in 2024 satisfies that. Keep in mind these are six-year-old predictions.
Are you wishing that he had tighter confidence intervals?
If the predictions are meant to be bold, then yes. If they're meant to be fairly obvious, then no.
For example, saying that flying cars will be in widespread use NET 2025 is not much of a prediction. I think we can all say that if flying cars will be in widespread use, it will happen No Earlier Than 2025. It could happen in 2060, and that NET 2025 prediction would still be true. He could mark it green in 2026 and say he was right, that, yes, there are no flying cars, and so mark his scorecard another point in the correct column. But is that really a prediction?
A bolder prediction would be, say "Within 1-2 yrs of XX".
So what is Rodney Brooks really trying to predict and say? I'd rather read about what the necessary gating conditions are for something significant and prediction-worthy to occur, or what the intractable problems are that would make something not be possible within a predicted time, rather than reading about him complain about how much overhype and media sensation there is in the AI and robotics (and space) fields. Yes, there is, but that's not much of a prediction or statement either, as it's fairly obvious.
There's also a bit of an undercurrent of complaint in this long article about how the not-as-sexy or hyped work he has done for all those years has gone relatively unrewarded and "undeserving types" are getting all the attention (and money). And as such, many of the predictions and commentary on them read more as rant than as prediction.
Presumably you read the section where Brooks highlights all the forecasts executives were making in 2017? His NET predictions act as a sort of counter-prediction to those types of blind optimistic, overly confident assertions.
In that context, I’d say his predictions are neither obvious nor lacking boldness when we have influential people running around claiming that AGI is here today, AI agents will enter the workforce this year, and we should be prepared for AI-enabled layoffs.
The NET estimation is supposed to be a counter to the irrational exuberance of media and PR. E.g., Musk says they'll get humans to Mars in 2020, and the counter is "I don't think that will happen until at least 2030".
> Systems which do require remote operations assistance to get full reliability cut into that economic advantage and have a higher burden on their ROI calculations
Technically true but I'm not convinced it matters that much. The reason autonomation took over in manufacturing was not that they could fire the operator entirely, but that one operator could man 8 machines simultaneously instead of just one.
All that verbiage about robotaxis and not a single mention about China, which by all accounts is well ahead of the US in deploying them out on the road. (With a distinctly mixed track record, it must be said, but still.)
It's far too rambly and vague to make any sense of the achieved results, I think.
For me these predictions are about being aware of how progress can happen based on history, but this will not lead to any breakthrough. I am not in the camp of skeptics, so I still like the hype cycles; they create an environment for people to push the boundaries and sometimes help untested ideas get explored. That might not have happened without a hype cycle. I am in the camp of people who are as positive as George Bernard Shaw in his two quotes:
In hindsight, when we look back, everything looks like we anticipated it, so predictions are no different: some pan out, some don't. My feeling after reading the prediction scorecard is that you need the right balance between the risk-averse (who either are doubtful or do not have faith that things will happen quickly enough) and risk-takers (who are extremely positive) for anything good to happen. Both help humanity move forward and are a necessary part of nature. It is possible AGI might replace humans in the short term and then new kinds of work emerge and humans again find something different. There is always disruption with new changes; some survive and some can't. Even if nothing much happens, it's worth trying, as said in quote 1.
>Individually owned cars can go underground onto a pallet and be whisked underground to another location in a city at more than 100mph.
I'm curious where this idea even came from; I'm not sure who the customer would be. It's a little disappointing he doesn't mention maglev trains in a discussion of future rapid transit. I'd much rather ride a smooth maglev across town than an underground pallet system.
Yet such an underground system should exist to transport deliveries.
If someone wants to have a credible prediction scorecard, get it on some third-party platform like Metaculus, Manifold, GJOpen, Polymarket, ...
LOL about the last paragraphs:
> Let’s Continue a Noble Tradition!
> The billionaire founders of both Virgin Galactic and Blue Origin had faith in the systems they had created. They both personally flew on the first operational flights of their sub-orbital launch systems. They went way beyond simply talking about how great their technology was, they believed in it, and flew in it.
> Let’s hope this tradition continues. Let’s hope the billionaire founder/CEO of SpaceX will be onboard the first crewed flight of Starship to Mars, and that it happens sooner than I expect. We can all cheer for that.
How am I supposed to read this? As thinly veiled hatred for Mr. Musk?
Related. Others?
Rodney Brooks Predictions Scorecard - https://news.ycombinator.com/item?id=34477124 - Jan 2023 (41 comments)
Predictions Scorecard, 2021 January 01 - https://news.ycombinator.com/item?id=25706436 - Jan 2021 (12 comments)
Predictions Scorecard - https://news.ycombinator.com/item?id=18889719 - Jan 2019 (4 comments)
Quite an unreadable web page, and it somehow rationalises that there was 'everything before me' and 'everything after me' with regard to technology and prediction. An unfortunate understanding of reality, really.
The next big thing beyond deep learning being LLMs is funny
> LLMs have proved amazingly facile with language.
If you took a transcript of a conversation with Claude 3.6 Sonnet, and sent it back in time even five years ago (just before the GPT-3 paper was published), nobody would believe it was real. They would say that it was fake, or that it was witchcraft. And whoever believed it was real would instantly acknowledge that the Turing test had been passed. This refusal to update beliefs on new evidence is very tiresome.
The whole point of the post is that many have updated their beliefs too much.
Similarly if you could let a person from five years ago have a spoken conversation with ChatGPT Advanced Voice mode or Gemini Live. For me five years ago, the only giveaways that the voice on the other end might not be human would have been its abilities to answer questions instantaneously about almost any subject and to speak many different languages.
The NotebookLM “podcasters” would have been equally convincing to me.
Does it drive anyone else crazy when an author posts 15,000 words (yes, there are that many in this article) when 1,500 would have more than communicated the relevant information? The length of this article is almost comical.
It's long, so I'm skimming a little and... flying cars. If you don't know why we don't have flying cars, you're not a good engineer.
It really doesn't matter what prestigious lab you ran, as that apparently didn't impart the ability to think critically about engineering problems.
[Hint: Flying takes 10x the energy of driving, and the cost/weight/volume of 1 MJ hasn't changed in close to a hundred years. Flying cars require a 10x energy breakthrough.]
The article is responding to claims by CEOs of car companies, industry and business press, and other hype sources that keep predicting flying cars next year or so. It's predicting that, against this hype, it will not come to pass. Not sure why you've worded your comment in such a way as if the article was hyping up flying cars.
Not to mention that, since we do have helicopters, the engineering challenge of flying cars is almost entirely unrelated to energy costs (at least for the super rich: the equivalent of, say, a Rolls-Royce, not of a Toyota). The thing stopping flying cars from existing is that it is extremely hard to make an easy-to-pilot flying vehicle, given the numerous degrees of freedom (and potential catastrophic failure modes) and the significantly higher unpredictability and variance of the medium (air vs. road surface).
Plus there is the major problem of noise pollution, which reaches extreme levels for somewhat fundamental reasons (you have to displace a whole lot of air to fly, which is very close to having to create sound waves).
So, overall, the energy problem is already fixed, we already have point-to-point flying vehicles usable, and occasionally used, in urban areas, helicopters. Making them safe when operated by a very lightly trained pilot, and silent enough to not wake up a neighborhood, are the real issues that will persist even if we had mini fusion reactors.
Not quite. It's about 3x. It also depends on whether you're talking fixed wing or rotary wings.
A modern car might easily have 130 kW or more, and that's what a Cessna 172 has (around 180 hp). (Sure, a plane cruises at the higher end of that, while a car only uses that much to accelerate and cruises at the lower end of the range - still not a factor of 10x.)
As another datapoint, a Diamond DA40 does around 28 miles per gallon (< 9 litres per 100 km) at 60% power cruise.
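The power-and-speed comparison above can be sketched as a few lines of arithmetic. All figures here are rough assumptions for illustration (a Cessna 172 cruising at ~75% of its ~134 kW rating at ~226 km/h, versus a car drawing ~20 kW in steady highway cruise), not measured data:

```python
# Back-of-envelope energy-per-distance comparison: fixed-wing light
# aircraft vs. car. Every number below is an illustrative assumption.

plane_power_kw = 0.75 * 134   # ~100 kW at 75% cruise power (assumed)
plane_speed_kmh = 226         # ~122 knots cruise (assumed)

car_power_kw = 20             # steady highway cruise draw (assumed)
car_speed_kmh = 100

# Energy per km is power divided by speed.
plane_kwh_per_km = plane_power_kw / plane_speed_kmh
car_kwh_per_km = car_power_kw / car_speed_kmh

ratio = plane_kwh_per_km / car_kwh_per_km
print(f"plane: {plane_kwh_per_km:.2f} kWh/km, "
      f"car: {car_kwh_per_km:.2f} kWh/km, ratio ~{ratio:.1f}x")
```

Under these assumed figures the ratio comes out in the low single digits, consistent with the "about 3x, not 10x" claim, though the exact number swings with the aircraft, cruise setting, and car chosen.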
The article is not optimistic on flying cars. The prediction is that an expensive flying car could be purchased no earlier than 2036, and notes a strong possibility that it won’t even happen by 2050. Plus states that minor success (aka 0.1% of car sales are flying cars) isn’t going to happen in his lifetime.
The author also expands on this:
> Don’t hold your breath. They are not here. They are not coming soon.
> Nothing has changed. Billions of dollars have been spent on this fantasy of personal flying cars. It is just that, a fantasy, largely fueled by spending by billionaires.
It’s worth actually reading the article before trashing someone’s career and engineering skills!
Engineering is about focusing on what matters. There's no point in talking about flying cars: they will exist when portable fusion exists, so just talk about that.
So you are saying that a true engineer doesn’t read articles and criticizes a successful engineer that wrote said article with hand-wavy arguments?
>>> [self-driving cars are remote controlled] in all cases so far deployed, humans monitoring those cars from a remote location, and occasionally sending control inputs to the cars.
Wait, What now?
I have never heard this, but from the founder of CSAIL I am going to take it as a statement of fact and proof that basically every AI company is flat out lying.
I mean the difference between remote piloting a drone that has some autonomous flying features (which they do to handle lag etc) and remote driving a car is … semantics?
But yeah it’s just moving jobs from one location to another.
Note that even the examples he gives are related to things like an operator telling the car to overtake a stopped truck instead of waiting for it to start again. So occasional high level decisions, not minute-to-minute or even second-to-second interactions like you have when flying a drone.
This is more like telling your units to go somewhere in a video game, and they mostly do it right, but occasionally you have to take a look and help them because they got stuck in a particularly narrow corridor or something.
Nitpick: he's not the founder, not by far. He's just a past director of CSAIL
I don't know the motivation behind making robotics and AI predictions, since these things have been done to death since the 70s, but I know people who bet on high inflation made a killing in financial futures.
I am always a fan of people who pretend to have psychic powers.
Predict the future, Mr. Brooks!
So... he's not a fan of Elon Musk, I take it?
What you marked as hype reveals a flaw in your ability to distinguish real-world cases from wishful thinking.
You are not predicting, just daydreaming.
> It distorts where VC money goes, always to something that promises impossibly large payoffs–it seems it is better to have an untested idea that would have an enormous payoff than a tested idea which can get to a sustainable business
But this is the whole point of VC investing. It is not normal distribution investing.
What a weird writer: lots of interesting things to talk about, but this very long essay kept circling back to the author's obsession with their own prowess, drawing out huge expositions and bullet lists on how good they are at predicting things. Call it a self-referential appeal to authority.
Another perspective is that this is a person who takes great care and is very thorough in examining and re-evaluating his reasoning, and who makes an effort to explain the logic behind it, which can be helpful if you are trying to figure out whether you agree or disagree.
It is odd. The product of a mind which clearly thinks very highly of itself.
> A robot that has any real idea about its own existence, or the existence of humans in the way that a six year old understands humans
It seems to me we’re at the very least close to this, unless you hold unproven beliefs about grey matter vs silicon.