A U.K. woman was photographed in front of a mirror where her reflections didn't match her pose, but not because of a glitch in the Matrix. Instead, it's a simple iPhone computational photography mistake.

[-] aeronmelon@lemm.ee 14 points 1 year ago

It's a really cool discovery, but I don't know how Apple is supposed to program against it.

What surprises me is how much of a time range each photo has to work with. Enough time for Tessa to put down one arm and then the other. It's basically recording a mini-video and selecting frames from it. I wonder if turning off features like Live Photo (which starts recording video a second or two before you actually press the shutter) would force the Camera app to select from a narrower window of time.

Maybe Apple could combine facial recognition with the post-processing: if the software thinks it's looking at multiple copies of the same person, it should time-sync the frames chosen for each section of the final photo. It wouldn't be foolproof, but it would be better than nothing.
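That time-sync idea can be sketched in a few lines. This is a toy model, not Apple's pipeline: frame candidates, sharpness scores, and the function names are all hypothetical. Normally each image region would pick its best frame independently; when the same face is detected in several regions, all regions are forced onto one shared timestamp.

```python
# Toy sketch of the comment's idea: panel regions of a composite photo
# normally choose their sharpest candidate frame independently. If the
# same person is detected in multiple panels, force every panel onto a
# single shared timestamp instead. All names here are hypothetical.

def choose_panel_frames(panels, same_person_detected):
    """panels: one list of (timestamp, sharpness) candidates per panel
    region. Returns the chosen timestamp for each panel."""
    if not same_person_detected:
        # Default behavior: best frame per panel, independently.
        return [max(frames, key=lambda f: f[1])[0] for frames in panels]
    # Time-synced: among timestamps available to every panel, pick the
    # one with the highest total sharpness across all panels.
    common = set.intersection(*(set(t for t, _ in frames) for frames in panels))
    best = max(common, key=lambda t: sum(dict(frames)[t] for frames in panels))
    return [best] * len(panels)
```

The synced result can look slightly worse per panel (a less sharp frame may win), which is presumably why a real pipeline prefers per-region selection in the first place.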

[-] xantoxis@lemmy.world 36 points 1 year ago

Program against it? It's a camera. Put what's on the light sensor into the file and you're done. They programmed it to make this happen, by pretending that multiple images are the same image.

[-] ninekeysdown@lemmy.world 3 points 1 year ago

That’s oversimplified. There’s only so much light you can gather on a sensor at the sizes that fit in mobile devices. To compensate, there’s A LOT of processing going on. Even higher-end DSLR cameras do post-processing.

Even shooting RAW like you’re suggesting involves some amount of post processing for things like lens corrections.

It’s all that post processing that allows us to have things like HDR images for example. It also allows us to compensate for various lighting and motion changes.

Mobile phone cameras are more about the software than the hardware these days.
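One concrete example of the processing being described: HDR on phones typically merges a bracketed stack of exposures. Below is a minimal, pure-Python sketch of Mertens-style exposure fusion, weighting each sample by how close it is to mid-gray; real pipelines also align frames, denoise, and tone-map, none of which is shown here.

```python
import math

# Toy exposure fusion, one ingredient of phone "HDR": blend the same
# pixel from several exposures, giving more weight to well-exposed
# (near mid-gray) samples and less to clipped shadows/highlights.

def fuse_pixel(values, sigma=0.2):
    """values: one pixel sampled from several exposures, each in [0, 1].
    Returns a weighted blend favoring well-exposed samples."""
    weights = [math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2)) for v in values]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

def fuse_exposures(stack):
    """stack: list of equally sized 2-D images (lists of rows)."""
    h, w = len(stack[0]), len(stack[0][0])
    return [[fuse_pixel([img[y][x] for img in stack]) for x in range(w)]
            for y in range(h)]
```

A blown-out sample near 1.0 gets a tiny weight, so the fused result leans toward whichever exposure actually captured detail at that pixel.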

[-] cmnybo@discuss.tchncs.de 11 points 1 year ago

With a DSLR, the person editing the pictures has full control over what post processing is done to the RAW files.

[-] ninekeysdown@lemmy.world 1 points 1 year ago

Correct, I was referring to RAW shot on mobile, not a proper DSLR. I guess I should have been clearer about that. Sorry!

[-] uzay@infosec.pub 2 points 1 year ago

You might be conflating a RAW photo file with the way it is displayed. A RAW file isn't actually an image file; it's a container holding the sensor pixel data, metadata, and a pre-generated JPEG thumbnail. To display an image, the viewer application either has to interpret the sensor data into an image (possibly with changes of its own) or just show the embedded JPEG. On mobile phones, I think the most likely case is that the JPEG is generated with the post-processing pre-applied and displayed that way. That doesn't mean the RAW file itself has any post-processing applied to it, though.
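The "container" point is easy to see in the bytes: most RAW formats (DNG, CR2, NEF) are TIFF containers, where an 8-byte header gives the byte order and an offset to the first image file directory (IFD), and the sensor data, metadata, and embedded JPEG all live in structures the IFDs point to. A minimal sketch that parses only that header:

```python
import struct

# Parse the 8-byte TIFF header that opens most RAW container formats.
# "II" = little-endian (Intel), "MM" = big-endian (Motorola); the magic
# number 42 confirms a TIFF structure; the final field is the byte
# offset of the first IFD, which indexes everything else in the file.

def parse_tiff_header(data):
    order = data[:2]
    if order == b"II":
        endian = "<"
    elif order == b"MM":
        endian = ">"
    else:
        raise ValueError("not a TIFF-based file")
    magic, first_ifd = struct.unpack(endian + "HI", data[2:8])
    if magic != 42:
        raise ValueError("bad TIFF magic number")
    return endian, first_ifd
```

Everything interesting (including the pre-rendered JPEG preview the comment mentions) hangs off the IFD chain; this sketch deliberately stops at the header.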

[-] randombullet@feddit.de 1 points 1 year ago

RAW files from cameras carry metadata that tells RAW converters which color profile and lens the shot was taken with, but any camera worth using professionally doesn't bake corrections into the RAW data itself. However, in special cases, such as lenses with heavy distortion, the RAW files have a distortion-correction profile enabled by default.
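For a sense of what such a distortion profile does, here is a hedged sketch of the simplest radial model: the profile stores polynomial coefficients, and the converter remaps each pixel's radius from the image center. The one-term model and the coefficient value below are illustrative, not taken from any real lens profile.

```python
# One-term radial distortion model: r_distorted = r_ideal * (1 + k1 * r_ideal^2).
# A RAW converter applying a lens profile must invert this mapping; for
# small k1 a simple fixed-point iteration converges quickly.
# k1 here is a made-up coefficient for illustration only.

def undistort_radius(r_distorted, k1, iterations=20):
    """Recover the ideal (corrected) radius from a distorted radius."""
    r = r_distorted
    for _ in range(iterations):
        r = r_distorted / (1 + k1 * r ** 2)
    return r
```

Real profiles use several coefficients (and often tangential terms as well), but the invert-a-polynomial step is the same idea.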

[-] ninekeysdown@lemmy.world -1 points 1 year ago

Correct, I was referring to RAW shot on mobile devices, not a proper DSLR. That was my observation, based on using the iPhone and Android RAW formats.

This isn’t my area of expertise so if I’m wrong about that aspect too let me know! 😃

[-] ricecake@sh.itjust.works 2 points 1 year ago

What's on the light sensor when? There's no mechanical shutter; the sensor can capture a continuous stream of light indefinitely.

Most people want a rough representation of what's hitting the sensor when they push the button. But they don't actually care about the sensor, they care about what they can see, which doesn't include the blur from the camera wobbling, or the slight blur of the subject moving.
They want the lighting to match how they perceived the scene, even though that isn't what the sensor picked up, because your brain edits what you see before you comprehend the image.

Doing those corrections is a small step to incorporating discontinuities in the capture window for better results.

[-] Petter1@lemm.ee -3 points 1 year ago

Or maybe just don’t move your arms for literally less than a second while the photo(s) is/are taken. Letting your arms fall by gravity takes less than a second. It’s a funny pic nonetheless.

this post was submitted on 01 Dec 2023
259 points (82.1% liked)
