

Yeah I agree on these fronts. The hardware might be good but software frameworks need to support it, which historically has been very hit or miss.


Depends strongly on what ops the NPU supports, IMO. I don’t do any local gen AI stuff, but I do use ML tools for image processing in photography (e.g. Lightroom’s denoise feature, GraXpert’s denoise and gradient extraction for astrophotography). These tools are horribly slow on the CPU. If the NPU supports the right software frameworks and data types, it could be genuinely useful here.
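To make that concrete, here’s roughly how I’d check what actually gets accelerated. This is a minimal sketch assuming ONNX Runtime; the model file name is made up, and which execution provider exposes the NPU depends on the vendor:

```python
# Minimal sketch: check whether an accelerator-backed execution provider is
# available before creating an ONNX Runtime session, falling back to CPU.
# The model path is a placeholder; the preferred-provider list is illustrative.
import onnxruntime as ort

available = ort.get_available_providers()
print("Available providers:", available)

# Prefer an accelerator provider if present, otherwise fall back to CPU.
preferred = [p for p in ("DmlExecutionProvider", "QNNExecutionProvider",
                         "OpenVINOExecutionProvider") if p in available]
providers = preferred + ["CPUExecutionProvider"]

session = ort.InferenceSession("denoise_model.onnx", providers=providers)
# Even with an accelerator provider selected, any individual op the provider
# doesn't implement gets assigned back to the CPU, which is exactly where the
# "depends on what ops the NPU supports" caveat bites.
```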


I’ll need to give this a read, but I’m not sure what’s novel here. The core idea sounds a lot like GaussianImage (ECCV '24), where they essentially do 3DGS but with 2D Gaussians, fitting an image with fewer parameters than implicit neural methods. Thanks for the breakdown!
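For anyone curious what that idea looks like in practice, here’s a toy sketch of fitting an image with 2D Gaussians. It’s my own simplified version, not the paper’s code; the parameterization, Gaussian count, and optimizer settings are all assumptions:

```python
# Minimal sketch: represent an image as a sum of 2D Gaussians and fit their
# parameters by gradient descent. Not the GaussianImage implementation.
import torch

H, W, N = 64, 64, 256                       # image size and number of Gaussians
target = torch.rand(H, W, 3)                # stand-in for a real image

# Learnable parameters: center, log-scale, rotation, and color per Gaussian.
mu    = torch.rand(N, 2, requires_grad=True)           # centers in [0, 1]^2
log_s = torch.full((N, 2), -3.0, requires_grad=True)   # anisotropic scales
theta = torch.zeros(N, requires_grad=True)             # rotation angles
color = torch.rand(N, 3, requires_grad=True)           # RGB weight per Gaussian

ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
pix = torch.stack([xs, ys], dim=-1).reshape(-1, 2)      # (H*W, 2) pixel coordinates

def render():
    # Whitening transform per Gaussian: A = S^-1 R^T, so |A d|^2 is the
    # Mahalanobis distance under covariance R S^2 R^T.
    c, s = torch.cos(theta), torch.sin(theta)
    R = torch.stack([torch.stack([c, -s], -1), torch.stack([s, c], -1)], -2)  # (N, 2, 2)
    A = torch.diag_embed(torch.exp(-log_s)) @ R.transpose(-1, -2)             # (N, 2, 2)
    d = pix[:, None, :] - mu[None, :, :]            # (H*W, N, 2) pixel-to-center offsets
    z = torch.einsum("pnj,nij->pni", d, A)          # whitened offsets
    w = torch.exp(-0.5 * (z ** 2).sum(-1))          # (H*W, N) Gaussian weights
    return (w @ color).reshape(H, W, 3)             # accumulate colors per pixel

opt = torch.optim.Adam([mu, log_s, theta, color], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = ((render() - target) ** 2).mean()        # simple L2 reconstruction loss
    loss.backward()
    opt.step()
```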
Their GPU situation is weird. The gaming GPUs are good value, but I can’t imagine Intel makes much money on them, given the relatively low volume and relatively large die compared to competitors (the B580’s die is nearly the size of a 4070’s despite competing with the 4060). Plus they don’t have a major foothold in the professional or compute markets.
I do hope they keep pushing in this area, though, since some serious competition for NVIDIA would be great.


GrapheneOS patches this behavior for apps that match their Google Play signature, IIRC. It’s something apps on the Play Store can opt into (basically they refuse to run if they weren’t installed via Play).
It was rather annoying until recently, since some apps require a certified Android install just to show up in the Play Store, but don’t actually check Play Integrity in the app itself. Those apps, when installed via Aurora, wouldn’t work for me until GrapheneOS added this patch.
I do research in 3D computer vision, and in general, depth from cameras (even multi-view) tends to be much noisier than LiDAR. LiDAR has the advantage of giving explicit depth, whereas with multi-view cameras you need to compute it, which has a fair number of failure modes (textureless regions, repetitive patterns, occlusions). I think that’s what the above user is getting at when they said Waymo actually has depth sensing.
This isn’t to say that Tesla’s approach can’t work at all, just that Waymo’s is more grounded. There are reasons to avoid LiDAR (primarily cost; a good LiDAR sensor is very expensive), but if you can fit LiDAR into your stack, it’ll likely help with reliability.
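To put “noisier” in perspective: for a stereo pair, depth is Z = f·B/d, so a fixed sub-pixel matching error turns into a depth error that grows roughly with Z². A quick back-of-the-envelope sketch (the focal length, baseline, and ±0.25 px error are illustrative assumptions, not any real rig’s specs):

```python
# Sketch of why computed (stereo) depth gets noisy at range: Z = f * B / d,
# so a fixed disparity error translates to a depth error that grows ~ Z^2.
# Focal length, baseline, and the 0.25 px matching error are made-up values.
f_px   = 1000.0   # focal length in pixels
B_m    = 0.30     # stereo baseline in meters
err_px = 0.25     # assumed disparity (matching) error in pixels

for Z in (5.0, 20.0, 50.0, 100.0):            # true depths in meters
    d = f_px * B_m / Z                        # ideal disparity at this depth
    Z_noisy = f_px * B_m / (d - err_px)       # depth if disparity is off by err_px
    print(f"Z = {Z:5.1f} m  disparity = {d:6.2f} px  depth error ≈ {Z_noisy - Z:6.2f} m")
```

A LiDAR return measures time of flight directly, so its range error stays roughly constant instead of blowing up with distance, which is a big part of why it’s the more grounded depth source.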