We don't use AI to help write code due to copyright concerns; it's against our policy. We obviously need to be very careful with what we're doing, and we can't be sure the model hasn't seen Apple docs or RE'ed Apple binaries (which we have very careful clean-room policies on) in its training data. Nor can we guarantee that generated code is GPL+MIT compatible (it may draw inspiration from GPL-only drivers in the same subsystems), and we want to use GPL+MIT so that the BSDs can take inspiration from our drivers.
Given that literally no one is enforcing this, it seems like a moral rather than a business decision, no? Isn't the risk here that your competitors, who have no such moral qualms, will just commit all sorts of blatant copyright infringement, and it really won't matter because no one is enforcing it?
I don't see open source as having "competitors". If someone wants to make a fork and use AI to write code (which I also think wouldn't be very useful, as there's no public documentation and everything needs to be traced and RE'ed), they are welcome to. We're interested in upstreaming, though, which means we need to make sure the origin and licence of the code are compatible and acceptable for mainline, and we don't want to infringe on Apple's copyright (which they may enforce against a fork with less strict rules than ours).
I get "fear of being sued or decoupled from the upstream project" for sure. It definitely speaks to the sad state of affairs that companies at Apple's scale simply operate with complete impunity under copyright law when it comes to using AI (you think Apple isn't using tools like Claude internally? I can 100% guarantee you they are), yet are able to turn around and bully people who might dare to do the same.
I'll believe it when I see a court case where they go after someone for AI-generated slop and win. I don't see much evidence of that happening right now, or really at any point since the advent of these tools.
Why would any serious project want to risk being the legal guinea pig for that experiment? And to what end? Everyone is pretty much in agreement that reusing code you're not licensed to use is bad for open source and just an all around shitty thing to do.
AI wouldn't work here. The OP's task was converting one open source driver into another one for FreeBSD. Since the Mac doesn't have open source drivers to start with, a person still has to do the ground research. At least until you can somehow give the AI the ability to fully interact with the computer at the lowest levels and observe the hardware connected to it.
If I were someone on the run, I would just get a fake license plate. They record plates on the interstates as well. They also have cameras and can presumably use AI to alert on a certain make, model, and color of car or trailer near its last-seen area. The only way to bypass that is swapping cars or getting a really generic, popular car.
I still believe in cameras. I have a comma.ai 3X and it works really well. Just add a thermal camera to deal with fog etc. Waymo has some of the same camera limitations that Comma and Tesla do.
There's no reason to believe in just cameras. Cameras are easily blinded by glare, and their efficacy drops dramatically when they get dirty. Having inexpensive lidar AND cameras is the best of both worlds. When it comes to safety and comfort, we shouldn't be optimizing for cost. If we figure out how to make cameras alone bulletproof in the future, great. But that's not where we are today.
I think that supports most people's viewpoint, though. Visible-light cameras alone can 'work', but more sensors are of course better. Your infrared example, for instance.
The only reason not to have more sensors of different types is cost (both equipment and processing). Those costs are coming down fast.
For a smaller display, have you tried the reMarkable 2? Alternatively, an anti-glare OLED monitor with high peak brightness (e.g. LG G4-level nits) might help with the price issue.
I used to check them out from the military library to read as a teenager – the books looked cool, official in their white bindings, and I loved the facts and descriptions of countries.
Sounds like they were bluffing and trying to coerce the researcher into signing an NDA. I wouldn't have signed, and they wouldn't have reach in the US, and presumably not in Germany, where the researcher is based. Also, I'm glad the affected vendor isn't DAN.
In my experience dealing with, e.g., Amazon Prime Video customer service, the actual people working in customer service can't do those jobs well either. As an example, I've complained multiple times about live sports streams getting interrupted and showing an "event over" notice while the event is still happening. It's a struggle to get them to understand the issue, let alone get it fixed. They haven't been helpful a single time.
So if AI improves a bit, it might be better than the current customer service workers in some ways...