I think there’s a sort of perfect storm that can happen. Suppose there are two types of YouTube users (I think there are other types too, but for the sake of this discussion we’ll just consider these two groups):
- Type A watches a lot of niche content, of which there isn't much on YouTube. The channels they're subscribed to might upload anywhere from once a month to once a year or less.
- Type B tends to watch one kind of content, of which there are hundreds of hours from hundreds of different channels. And they tend to watch a lot of it.
If a person from group A happens to click on a video that people from group B tend to watch, their homepage will then be flooded with more of that type of video, crowding out all of the stuff they'd normally be interested in.
IMO YouTube’s algorithm has vacillated wildly in quality over the years. At one point, if you were a type A user, it didn’t know what to do with you at all, and your homepage would consist exclusively of live streams with 3 viewers and “family guy funny moments compilation #39.”
So, keep in mind that single-photon sensors have been around for a while, in the form of avalanche photodiodes and photomultiplier tubes. And avalanche photodiodes are already pretty commonly used in LiDAR systems.
The ones talked about in the article I linked collect about 50 points per square meter at a horizontal resolution of about 23 cm. Obviously that’s way worse than what’s presented in the phys.org article, but that system is also measuring from 3 km away while covering an area of 700 square km per hour (these systems are used for wide-area terrain scanning from airplanes). Given the way LiDAR works, the system in the phys.org article could be scanning with a much narrower beam to get far more datapoints per square meter.
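To make the comparison concrete, here's a back-of-the-envelope check using the airborne system's figures above (the 100 m² narrow-beam footprint at the end is a hypothetical number I picked for illustration, not something from either article):

```python
# Total return rate implied by the airborne system's published figures:
# 700 km^2 per hour of coverage at ~50 points per square meter.
coverage_m2_per_s = 700e6 / 3600          # 700 km^2/hour -> m^2/s
density_pts_per_m2 = 50
return_rate = coverage_m2_per_s * density_pts_per_m2
print(f"{return_rate:.2e} returns/s")     # roughly 9.7e6 per second

# At a comparable return rate, a narrow beam confined to a much smaller
# footprint (100 m^2 here is a made-up illustrative figure) yields a
# proportionally higher point density:
narrow_footprint_m2_per_s = 100.0
narrow_density = return_rate / narrow_footprint_m2_per_s
print(f"{narrow_density:.0f} pts/m^2")    # tens of thousands of points/m^2
```

The point is just that points-per-square-meter by itself says little; at a fixed pulse rate, it trades off directly against how much area you sweep per second.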
Now, this doesn’t mean that the system is useless crap or whatever. It could be that the superconducting nanowire sensor they’re using lets them measure the arrival time much more precisely than normal LiDAR systems, which would give them much better depth resolution. Or it could be that the sensor has much less noise (false photon detections) than the commonly used avalanche diodes. I didn’t read the actual paper, and honestly I don’t know enough about LiDAR and photon detectors to really be able to compare those stats.
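The timing-precision point maps directly onto depth resolution: for time-of-flight ranging, a timing uncertainty Δt corresponds to a range uncertainty of roughly c·Δt/2 (the factor of 2 accounts for the round trip). A quick sketch, where the jitter values are representative ballpark figures for the two detector types, not numbers from the paper:

```python
# Range resolution from time-of-flight timing jitter: dz = c * dt / 2.
C = 299_792_458.0  # speed of light, m/s

def depth_resolution_m(timing_jitter_s: float) -> float:
    """Round-trip time-of-flight range uncertainty for a given timing jitter."""
    return C * timing_jitter_s / 2

# Ballpark assumptions: superconducting nanowire detectors can have jitter
# in the tens of picoseconds; avalanche photodiodes are often in the
# hundreds of picoseconds.
print(depth_resolution_m(20e-12))   # ~3 mm
print(depth_resolution_m(500e-12))  # ~7.5 cm
```

So even an order-of-magnitude improvement in detector jitter translates into a correspondingly finer depth measurement, which could be where a system like this actually shines.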
But I do know enough to say that the range and single-photon capability of this system aren’t really the special parts of it, if it’s special at all.