
Judge in Kyle Rittenhouse case needs clarity on how pinch-to-zoom works

The trial of Kyle Rittenhouse, who is accused of a double homicide, is currently underway in the US, and yesterday something quite bizarre happened during proceedings.

No, we’re not talking about Rittenhouse’s emotional display during questioning, but rather a technicality that Judge Bruce Schroeder doesn’t seem to understand.

That technicality is how an Apple device enlarges, or scales, an image or video when you pinch-to-zoom. It came up when prosecutors wanted to show footage of Rittenhouse shooting Joseph Rosenbaum. The video is relatively small, so prosecutors wanted to zoom in on it so the jury could see it more easily.

“iPads, which are made by Apple, have artificial intelligence in them that allow things to be viewed through three-dimensions and logarithms,” the defence argued. “It uses artificial intelligence, or their logarithms, to create what they believe is happening. So this isn’t actually enhanced video, this is Apple’s iPad programming creating what it thinks is there, not what necessarily is there.”

Yeah we were left scratching our heads at that one as well.

What makes this even more bizarre is that Schroeder sided with the defence and tasked the prosecution with proving that Apple doesn’t use AI to manipulate images. Further to that, the prosecution was tasked with producing an expert to testify on the matter within 20 minutes, during a recess. That expert never appeared, and the jury was seemingly left to watch a tiny video on a 4K display.

Now, we aren’t arguing that a judge should concern themselves with learning how digital image processing works; they have more important things to do with their time. There is, however, something to be said about siding with somebody whose own statements suggest they don’t know what they’re talking about.

With that said, the idea that pixels are being added when zooming in does hold some merit, although the notion that they could change the context of an image is a massive stretch.

This is because when you enlarge an image, the number of pixels needed to fill an area of the image or video increases, but the pixels within the original file aren’t changed, and you can’t just plug anything you want into the gaps or you would change the image entirely. As such, image scaling technology is used to smooth things out and ensure that images aren’t littered with empty spaces or incorrect colours when zoomed in.

This scaling can be accomplished through a variety of techniques, including nearest-neighbour, bilinear and bicubic interpolation, as well as more complex algorithmic scaling. The gaps that are filled are a “best guess” at what would be in that space.
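To make that concrete, here’s a minimal sketch of the simplest of these techniques, nearest-neighbour scaling. The 2×2 greyscale array is made up purely for illustration, but the point stands for any image: the original pixel values survive untouched, and the enlarged image is simply padded out with copies of them.

```python
import numpy as np

# A hypothetical 2x2 greyscale "image"; the values are purely illustrative.
src = np.array([[ 10, 200],
                [120,  60]], dtype=np.uint8)

def upscale_nearest(img, factor):
    """Enlarge by repeating each source pixel; no new information is invented."""
    h, w = img.shape
    out = np.empty((h * factor, w * factor), dtype=img.dtype)
    for y in range(h * factor):
        for x in range(w * factor):
            # Each output pixel simply copies its closest source pixel.
            out[y, x] = img[y // factor, x // factor]
    return out

big = upscale_nearest(src, 2)
print(big)
# The four original values are all still there; the "new" pixels are copies
# of their neighbours, not pixels the software decided to add on its own.
```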

No matter which technique is used, however, the source image is always the point of reference and, importantly, gaps are filled based on the surrounding pixels. The idea that you would see something that isn’t in the original image at all is rather extreme.
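Bilinear interpolation, which is closer to the kind of smooth scaling a device actually performs, makes that “best guess” by blending the source pixels around each gap. Here’s a rough sketch, again with an invented 2×2 array; the takeaway is that a value sampled exactly on a source pixel comes back unchanged, and anything in between is just a weighted average of its neighbours.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Estimate a value between source pixels as a weighted average
    of the four surrounding pixels -- the 'best guess' described above."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    top = img[y0, x0] * (1 - dx) + img[y0, x1] * dx
    bottom = img[y1, x0] * (1 - dx) + img[y1, x1] * dx
    return top * (1 - dy) + bottom * dy

# Hypothetical 2x2 source image, values chosen only to make the maths obvious.
src = np.array([[  0.0, 100.0],
                [100.0, 200.0]])

print(bilinear_sample(src, 0.5, 0.5))  # 100.0 -- a blend of the four corners
print(bilinear_sample(src, 0.0, 0.0))  # 0.0   -- on a source pixel, the original value is returned unchanged
```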

While the video below from Computerphile is old, it explains the basics of image scaling rather simply, and you can see that while the argument that pixels are added is technically correct, the idea that Apple’s software is adding whatever it wants is preposterous.

The charges faced by Rittenhouse include first-degree reckless homicide, first-degree intentional homicide, attempted first-degree intentional homicide and two counts of recklessly endangering safety.

[Image – CC 0 Pixabay]
