Let's check my signatures:

```
$ gpg --verify IMG_20170307_
gpg: Signature made Tue Mar 7 14:37:55 2017 CET
gpg: assuming signed data in 'IMG_20170307_'
gpg: Signature made Tue Mar 7 14:37:56 2017 CET
```

This proves that the files were signed by the given key and that their contents haven't been modified.

I get the general idea, but Proof Mode only seems to think about three involved parties:

- The activist trying to prove that something has happened.
- The media trying to prove to the general public that the data sent in by the activist is authentic.
- An attacker (the government etc.) trying to disprove that the data is authentic.

But this leaves out the following attack vectors:

- A malevolent activist trying to pass fake data as authentic (happens all the time).
- Malevolent media trying to pass fake data as authentic (happens all the time; this is daily business for all the "Fake News" websites out there).
- An attacker trying to frame the activist by generating a proof for data which the activist has never recorded.
- An attacker simply taking the device away or destroying it, so that the key pair no longer exists and parts of the sensor data (MAC addresses, device ID) can no longer be checked.
- An attacker trying to destroy the trust in the whole proof system.

The problem with proving something is that there has to be a full chain of trust from the beginning to the end. You basically need the whole device to be a certified, tamper-proof box, and this is how professional systems are built. Police body cameras, for example, are black boxes: you can only turn them on or off and download the data, nothing else, and many implement additional hardware security measures to prevent and detect tampering. High-end Nikon and Canon cameras also offer image authentication mechanisms built right into the hardware to detect manipulation. Many publishing companies now require photographers to follow strict guidelines after some major hoaxes slipped through, but even the Nikon Image Authentication System has its flaws.

All data sources, from the camera picture to the location data, can easily be faked. No third party ever gets to see the data; the only person vouching for its authenticity is yourself. If Proof Mode made it possible to easily generate a proof for fake data, then all trust in the system would vanish. And Proof Mode offers plenty of ways to generate fake proofs.

The Proof Mode app doesn't record pictures and videos itself; it just signs other applications' data. Looking through the code I noticed that it doesn't even react to an actual camera event (Android has ACTION_IMAGE_CAPTURE for this), but merely watches for changes in the standard camera data folders (the DCIM folder on both internal and external media) every ten seconds and signs everything that looks like an image or video file. So I just connected the phone to my laptop via a USB cable, copied this Chuck Norris fact into the DCIM folder, and a couple of seconds later I had a "proof" for it.

Proof Mode just created a proof that this picture was recorded right now, on this device, at the current location. In reality it was screen-captured days ago, somewhere else. A malevolent activist can thus easily pretend a manipulated picture was taken at their current time and location.
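The folder-watching behaviour described above can be sketched in a few lines. This is a minimal Python illustration, not the app's actual (Java) code: the `sign` function is a hypothetical stand-in that just hashes the file, where the real app would produce a PGP signature. The point it demonstrates is that a poller like this has no way to tell a genuine camera photo from a file that was simply copied into the folder; both get a "proof".

```python
import hashlib
import time
from pathlib import Path

# File extensions treated as "looks like an image or video file".
MEDIA_EXTENSIONS = {".jpg", ".jpeg", ".png", ".mp4"}

def sign(path: Path) -> str:
    """Hypothetical stand-in for the signing step: hash the file contents.
    The real app would sign the file with the device's PGP key instead."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def poll_once(folder: Path, proofs: dict) -> None:
    """Sign every media-looking file that appeared since the last poll.
    Nothing here checks *where* the file came from: a screenshot pushed
    over USB is indistinguishable from a photo the camera just took."""
    for path in folder.iterdir():
        if path.suffix.lower() in MEDIA_EXTENSIONS and path.name not in proofs:
            proofs[path.name] = sign(path)

def watch(folder: Path, interval: float = 10.0) -> None:
    """Watcher loop: re-scan the folder every ten seconds."""
    proofs: dict = {}
    while True:
        poll_once(folder, proofs)
        time.sleep(interval)
```

Dropping any `fake.jpg` into the watched folder yields a signature for it on the next poll, which is exactly the weakness the Chuck Norris experiment exploited.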