20

I saw a news report about now freely available software to make "deepfake" videos. Couldn't videos be internally marked using a private key, so that everyone could verify the originator using a public key? This could be built into browsers so that everyone could see whether something was fake or not.

Is it technically possible to mark a video stream throughout with something that can't be spoofed or removed? Then if a video had no mark, we would know it was rubbish.

  • 37
    It isn't a technical problem. It's a social problem. What would knowing the source improve? It's not like deep fakes are posted on whitehouse.gov... – vidarlo Mar 21 '22 at 10:51
  • 1
    There is a similar question about photos and signing them right in the camera: https://security.stackexchange.com/questions/212957/is-a-cryptographically-signing-camera-possible – Robert Mar 21 '22 at 12:36
  • 2
    How does knowing who the "originator" of the video is tell you anything about whether it's "rubbish"? Anyone can make a real video, and anyone can make a fake. – Ajedi32 Mar 21 '22 at 20:08
  • 1
    Why do people keep asking this about deep fakes? Deep fakes are making it cheaper to insert false information (also known as "lying") into video. How does cryptographically signing an email stop me from lying in an email? How does cryptographically signing a photo prevent me from photoshopping it? We already have this anyway, if a video is posted on senate.gov or on senate.gov's YouTube channel, you can be pretty sure it came from the US senate, but it can still be a deep-faked or generally contain falsehoods. – Boris Verkhovskiy Mar 21 '22 at 21:37
  • Perhaps we could use Blockchain to prevent videos from being modified or spoofed? – EasyWhenUknowHow Mar 21 '22 at 22:01
  • 6
    @Boris If you assume that any official statement by POTUS would be signed by whitehouse.gov, then a video of POTUS without that signature should not be trusted. This prevents someone posting propaganda claiming that POTUS supported them. – Barmar Mar 22 '22 at 02:46
  • 4
    @EasyWhenUknowHow Blockchain doesn't stop modification and spoofing. All it means is one can verify that a certain version of the video was signed at a certain time. Whether another version is the original untouched version or a convincing edit can't be proven, unless, say, the footage contains a verifiable reference to time and the creator immediately signs that video. – Martheen Mar 22 '22 at 05:38
  • @Martheen If there was some question, wouldn't you refer back to the original? – EasyWhenUknowHow Mar 22 '22 at 09:56
  • 1
    Which is the original? What's stopping anyone from taking the original video, editing it, and then putting it in the blockchain? Even if the original video is put on the blockchain later, you have no way to tell which one it is. – Martheen Mar 22 '22 at 10:09
  • @Martheen I thought that the whole point of Blockchain was to hold a verifiable, traceable sequence of transformations? If it doesn't do even that much, then it is completely useless. – EasyWhenUknowHow Mar 22 '22 at 10:19
  • That's only possible if the action can only be done directly on the blockchain, such as transactions or writing text. Video capture is not one of them, at least not with any practical bandwidth. Plus, nothing stops fakers from just pointing the lens at a Retina display playing a doctored video. – Martheen Mar 22 '22 at 10:56
  • 2
    @EasyWhenUknowHow the point of blockchain is to solve the "double spend" problem. That is, the problem of ensuring that in a decentralised digital cash system, it is not possible to send the same funds to two different parties (or equivalently, establishing which of the two parties received the funds, by determining which transaction occurred first). It has been extended to solve other related problems (such as Zooko's conjecture), but a key limitation is that it can only prove statements about events that happen within the system. It cannot testify about the world outside. – James_pic Mar 22 '22 at 11:03
  • 1
    @Barmar Either people check the origin of the videos they watch, in which case posting them on `whitehouse.gov` is enough, or they don't care and will happily trust a video on `whitehouse.fake` signed by `whitehouse.fake`. – Dmitry Grigoryev Mar 22 '22 at 11:07
  • @James_pic This seems to be a general, still unsolved problem with systems. Some people say it will never be solved. I say, just make the system bigger, to encompass more things. – EasyWhenUknowHow Mar 22 '22 at 11:56
  • 4
    Unless everyone is living in a simulation, you can't simply "make the system bigger", and actually doing that would involve the total death of privacy because every single activity would be recorded. – Martheen Mar 22 '22 at 15:24
  • @Martheen Maybe actions just need to be verifiable under the right conditions? That doesn't remove privacy any more than it is now. A bigger system is one with no 'outside', basically. It includes everyone. – EasyWhenUknowHow Mar 22 '22 at 17:32
  • 1
    How do you tell something is the *right condition*? Someone sleeping with their date is potentially damaging, and so would a deepfake of it, so should the act be logged in an immutable, publicly accessible ledger every time someone opens their pants? How does that not remove the entire concept of privacy? – Martheen Mar 23 '22 at 00:18
  • 3
    The impossibility of "making the system bigger" needs to be re-stated. A blockchain can only decide whether data submitted to the chain is consistent with other data already on the chain. If someone has an affair, or commits war crimes, this is a thing that happens in the real world and that the blockchain can't prevent, or independently detect or verify. Data about this only gets onto the chain if someone puts it there, and by that time it's second hand information. This is generally known as the oracle problem, which has no general solution. – James_pic Mar 23 '22 at 09:57
  • I'm not sure that *any* problems have general solutions. – EasyWhenUknowHow Mar 23 '22 at 16:27
  • Well, this exceeded all my expectations of anthill-kicking. Normally any Questions I ask are attacked, downvoted and closed within hours, so, hey, thanks! Sorry I can't vote or accept an answer. – EasyWhenUknowHow Mar 27 '22 at 21:27

9 Answers

65

In theory, yes. Signing a video file with a private key and then publishing the public key is no different than signing some text and then publishing it.
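
As a minimal sketch of this equivalence, assuming the Python `cryptography` package and an Ed25519 keypair (the file name and key handling are illustrative only):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generate a keypair; in practice the private key stays secret and the
# public key is published somewhere verifiers can find it.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open("video.mp4", "rb") as f:  # illustrative file name
    video = f.read()

signature = private_key.sign(video)

# Anyone with the public key can check that the bytes are unmodified:
try:
    public_key.verify(signature, video)
    print("signature valid")
except InvalidSignature:
    print("file was modified, or the signature is not from this key")
```

Note what this does and doesn't prove: the signature binds these exact bytes to this key, nothing more.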

But this doesn't really solve the problem. For example, imagine someone filmed a video of me putting on two differently colored socks - which, as we all know, is one of the worst imaginable crimes. The person who shot the video signs it with their private key and publishes it.

Now I vehemently deny the legitimacy of the video, saying it was obviously faked and that I would never wear differently colored socks. As it turns out, my claim was correct: someone shot a video of me wearing socks and then modified the video file to alter the color of one of my socks. They then signed this modified file and published it.

As you see from this example, signing a video really doesn't "verify" that the content of the video is "legitimate" in one way or another.

In fact...

it makes things even worse. "Deep Fakes" are primarily a social problem, meaning that a significant amount of people believe that the content of the video is real, despite it not being so. You cannot fix social problems with technology, as that tends to create even more social problems.

By adding a simple green checkmark of "Legitimate" to a video, you essentially teach people not to engage their brains and question what they see, and instead create a shortcut of "checkmark = truth". And while some people might not be fooled by it, keep in mind that propaganda doesn't need to work on everyone, just enough people.

In short: The best way to combat deep fakes is to teach people to think critically.

  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/135019/discussion-on-answer-by-mechmk1-could-videos-be-authenticated-using-private-and). – schroeder Mar 24 '22 at 15:58
31

Yes, you can use cryptography to authenticate video sources.

No, you cannot use cryptography to prove a video is authentic (and prevent deepfakes).

If these statements sound contradictory, read on.


What has been tried

This has been done in the past by camera manufacturers like Nikon and Canon, who wanted to assure that pictures taken with their cameras could be "authenticated".

The Nikon Image Authentication System was quickly broken by a simple firmware dump attack.

It's important to note that virtually all known schemes that attempted to provide security against a physical attacker have failed so far (sometimes in secret, for a while). The general saying is that "If an attacker has physical access to my machine, all bets are off".

How good could it possibly get?

If someone developed a perfectly secure system that resisted attacks for many years (with the help of tamper-proof chips / TPMs), it would only ever be able to verify something very narrow: that the produced image/video was indeed processed by a certain company/manufacturer, or certified as true by a certain government or news source.

This is useful for ensuring provenance (who says what?) in some controlled scenarios, but doesn't help at all for ensuring authenticity (what is real?) of the content.

Things this system doesn't prevent include:

  • A government certifying a fake video as genuine, claiming other governments are lying.
  • A smartphone company being coerced by a government into producing a few phones that allow secret fakes to be produced (by threatening company employees, for example)
  • A criminal stealing the signing keys from a company server and selling them to the highest bidder (if the receiver only uses it twice on obscure videos, the public will never know about the problem)
  • A person projecting a fake image into the certified camera lens (see analog hole)

What new problems are you creating?

So let's say these limitations don't bother you and you want to go ahead and build the system anyway. After all, no system is perfect, and something is better than nothing, right?

Once you create this system that certifies what is real, several things are likely to happen:

  • The local government will eventually become the decider of which images are true or false (through direct control or regulation)
  • Independent news media will gradually vanish (there's no need for alternative viewpoints when everyone trusts the same source)
  • Over time, people will stop doing any research. Scams will become more prevalent (since people don't practice critical thinking skills).
  • People that hold beliefs that contradict the state narrative will be ostracized

tl;dr: Cryptography is useful, but it can't replace trust. Centralizing decisions about trustworthiness leads to poor outcomes.

loopbackbee
  • What would lead to a true sense of security? – EasyWhenUknowHow Mar 21 '22 at 21:55
  • 1
    @EasyWhenUknowHow **Trust** leads to a true sense of security. *Which* trust depends on who you ask: trust in a trusted-third-party (God, government), trust in a web-of-trust, trust in your social circle, trust in your own research... – loopbackbee Mar 21 '22 at 22:16
  • So who should most people trust, say, in the US? – EasyWhenUknowHow Mar 21 '22 at 22:33
  • 2
    @EasyWhenUknowHow Not the right question. In this context, the right question is "Who _will_ they trust, collectively?" And the answer is "nobody". Everyone will trust only those sources (including "signature sources") that corroborate their own head canon and call fake news on any other source. This cannot be solved with any technology. This is a social dynamic. – orithena Mar 22 '22 at 11:19
  • @orithena "*The majority is always sane.*" That's the definition. – EasyWhenUknowHow Mar 22 '22 at 11:58
  • 2
    @EasyWhenUknowHow That definition does not hold up in the real world, especially not in information security. And evolution (as far as you can apply its principles to this context at all) would only lead to the survivors declaring their own past views as "sane" at some point -- because, "well, we survived, so our views must have been the sane ones". – orithena Mar 22 '22 at 12:12
  • @orithena It seems to be becoming difficult to have a collective world. But then, perhaps it always was. Let's try to make it easier, somehow. – EasyWhenUknowHow Mar 22 '22 at 12:16
  • 1
    @EasyWhenUknowHow It always was, is, and will be, as long as you're dealing with humans. "Making things easier" is a game of whack-a-mole that never ends. – orithena Mar 22 '22 at 13:23
  • 2
    @EasyWhenUknowHow "Having a collective world" has always been easy for small groups, and impossible for large enough groups. Focusing more on the people and things closer to you and ignoring things that don't affect you **directly** and **in the present** helps alleviating this perceived problem. There's no point asking "who should most people trust in the US" - you're not in a position to decide and you can always trust other people to make their own (good and bad) decisions. – loopbackbee Mar 23 '22 at 00:14
  • Perhaps it's more a matter of deciding when to distrust someone or something, otherwise the default position is to not worry about large things that would be obvious if they started to go wrong? How do we help people choose the correct things to distrust while just ignoring most everything else? – EasyWhenUknowHow Mar 23 '22 at 00:55
  • @EasyWhenUknowHow If you try to help people that way, you will inevitably cause some people to mistrust your teachings. Not only because you interrupt their world view and create cognitive dissonance in them, also because you will piss off some people in power who suspect that your teachings will diminish their power. They only need to publicly call you e.g. a communist (as an example applicable in the US) and you're off worse than what you started with. But this is really getting off topic here, this is becoming a topic for Politics SE and/or Psychology SE. – orithena Mar 23 '22 at 10:37
  • The OP did not suggest that videos be marked with a 'green check-mark'. Your section on 'new problems you are creating' applies only to those cases. There's no reason not to cryptographically verify the provenance of a video if that information is available. Adding 'green check-marks' to 'approved' sources is a separate matter, and it's something that social media platforms just do anyway. – Myridium Mar 24 '22 at 00:29
6

This is analogous to how big media companies and some governments "think" - there is good content, and there is bad content. The good content is somehow marked as good, anything else is bad. Good people control the "good" mark. All problems solved.

Why the evil bit-type solutions don't work is more or less widely known.

Why the ministry of truth-type solutions are risky and of limited use is also known.

Besides, deepfake technology is just one more technology. We have had a cinema industry for a good century now, and makeup experts who can paint anyone's face onto an actor. Deepfakes are no better, except maybe for being cheaper and faster, to an extent.

On the other hand, the whole deepfake drama of recent times stems from the possibility of publishing information anonymously. This is an important feature of more or less free societies and an important tool for keeping them free. In that regard, an occasional deepfake now and then is not that high a price.

What you propose will not kill deepfakes; it will kill anonymity. That is very much not the same thing.


This is not to say that digitally signing camera footage has no legitimate uses for other purposes. Security cameras, dash cameras, and professional reporter cameras do this (along with extensive digital-signature-based timestamping). This (especially the third-party timestamping) can in some cases be used to prove authenticity.

fraxinus
  • If you can't trust the ministry of truth, who can you trust? – EasyWhenUknowHow Mar 22 '22 at 10:34
  • 6
    @EasyWhenUknowHow Nobody. –  Mar 22 '22 at 10:49
  • So. @MechMK1, what country do you live in? Perhaps you need to move to a place where you can trust someone? Otherwise, you face Camus' famous question. – EasyWhenUknowHow Mar 22 '22 at 11:52
  • 1
    @EasyWhenUknowHow Move to Russia, China, N. Korea and friends? There you are expected to trust the government, but these places are generally where people move FROM. Or simply trust your God (if you happen to believe in a single God). Well, I trust my wife (this makes family life easier and she is a goddess anyway). All other things get just the reasonable amount of credibility. – fraxinus Mar 22 '22 at 12:08
  • This is also my answer. – EasyWhenUknowHow Mar 22 '22 at 12:13
  • @EasyWhenUknowHow the "real" answer is presumably some kind of mix of people with different, opposing goals, different amounts of resources, etc. Trusting "nobody" leads you to a Truman Show/Solipsism pit of despair. But putting absolute trust in any single point is doomed to failure, nobody is honest all the time. So belief and trust must be a sliding scale of credulity. Is government A lying to me about gov. B having WMDs? Entirely possible. Is every government (and airline pilot, and scientist) conspiring to trick me into thinking the world is round, when its flat? Very very unlikely. – mbrig Mar 23 '22 at 20:18
5

Think about the private key distribution for a minute. Who do you think should be able to sign videos?

  • users registering freely. This doesn't give videos any additional trustworthiness: why would I trust a video signed by user@example.com?

  • users who confirm their identity. This would make things worse: users posting videos would lose anonymity. People would be deterred from posting fakes, but they would also be deterred from posting controversial or anti-government content. Plus, I still won't be able to decide with reasonable certainty whether to trust a video from John Doe, 42 Random str., Metropolis or not.

  • certified recording equipment. This would make fake videos somewhat harder to produce: a fake video maker will not be able to use just a video editor. You still won't be able to tell if the video was staged or not, whether it really displays the place and the people it claims to represent, etc. Plus, you'll automatically mistrust videos from users who don't have the certified equipment, which are not necessarily fakes.

  • certified organizations. Those don't need to sign their videos at all: they can simply publish their videos on their own website, and unless their website is cracked you can be sure those videos are made by them. If they decide to post a fake, they'll have no problem forging the signature as well.

And of course, all these options still allow for scenarios where the private key gets compromised, or trusted parties become malicious under government pressure or criminal threats. Discussing these scenarios only makes sense if there is an option that works.

Dmitry Grigoryev
  • Perhaps we should use a one time pad, then publish it, say a month later? – EasyWhenUknowHow Mar 22 '22 at 12:01
  • @EasyWhenUknowHow And what problem would a one-time pad solve? The real issue with fake videos is *not* the proof of origin. E.g. I'm pretty certain you have written the comment above without any additional crypto. – Dmitry Grigoryev Mar 25 '22 at 09:47
  • I was just responding to the key exchange and key theft issues. If a typical video is only 'important' for a short time, but needs to be provable later, use a one-time pad to certify it (because it's unbreakable) and then say "it was me" later by publishing the key. Isn't this some sort of plausible scenario? I seem to recall reading about it. You get an unbreakable key, and verifiability, but secured for a limited time. You can't have *everything*. (where would you put it?) – EasyWhenUknowHow Mar 25 '22 at 11:56
4

I agree with the others that have opined on this question that a signature on the video in itself does little to authenticate the video. What matters is who made the signature.

In that regard, it's similar to the system of PKI that we use to authenticate SSL certificates on the web. Our web browsers do not trust just any signature on an SSL certificate - our browsers only trust signatures made by certificate authorities (CA's) that our browsers trust. We put our trust in these CA's to authenticate certificates on the web, and if the certificate has a valid signature by a trusted CA, then we feel confident that the certificate is authentic.

Perhaps we could have a similar ecosystem to authenticate videos, using 'video authorities' (VA's ?) that we trust. If I trust @EasyWhenUknowHow as one of my trusted VA's, and the video has a valid signature by @EasyWhenUknowHow, which I've verified using EasyWhenUknowHow's public key, then I can feel confident that the video is authentic. I smell a business opportunity...
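
As a rough sketch of what a client-side check against such a VA trust store might look like (the trust store, helper, and demo key below are hypothetical, Ed25519 keys are assumed, and real PKI uses certificate chains rather than bare keys):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

# Hypothetical local trust store, analogous to a browser's bundled CA list:
# VA name -> Ed25519 public key. Populated here with a freshly generated
# demo key; a real store would ship vetted, long-lived keys.
_demo_va_key = Ed25519PrivateKey.generate()
TRUSTED_VAS: dict[str, Ed25519PublicKey] = {
    "EasyWhenUknowHow": _demo_va_key.public_key(),
}

def video_is_va_signed(video_bytes: bytes, signature: bytes, va_name: str) -> bool:
    """Return True only if a VA we already trust vouches for these exact bytes."""
    key = TRUSTED_VAS.get(va_name)
    if key is None:
        return False  # unknown authority: no basis for trust
    try:
        key.verify(signature, video_bytes)
        return True
    except InvalidSignature:
        return False

# Usage: the VA signs, the client checks against its trust store.
clip = b"...video bytes..."
sig = _demo_va_key.sign(clip)
assert video_is_va_signed(clip, sig, "EasyWhenUknowHow")
```

As with CAs on the web, the hard part is not the verification call but deciding who gets into the trust store in the first place.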

mti2935
  • 1
    We already have such a system, of sorts. It's called "code signing": https://en.wikipedia.org/wiki/Code_signing. There is no general reason why this could not be applied to just about any blob, not only software executables. I am not sure, however, whether current video formats support such signing without breaking compatibility with older players. – Marcel Mar 21 '22 at 15:06
  • 2
    @Marcel Most modern containers, such as MP4 and MKV, support arbitrary tags that can be used for signing. – Martheen Mar 22 '22 at 05:34
  • @Marcel Finally, posting a video on YouTube will become as easy as deploying a software package! /sarcasm – Dmitry Grigoryev Mar 24 '22 at 14:56
  • Seriously though, what would the process of signing a video by a video authority look like? Will VAs be the only ones who are allowed to post trusted videos, or will they somehow assess the videos people send to them? – Dmitry Grigoryev Mar 24 '22 at 15:00
  • @DmitryGrigoryev The latter. If the VA deems the video to be legitimate, then they sign it. – mti2935 Mar 24 '22 at 15:14
  • @mti2935 That would mean the VA will either only accept videos from trusted sources (e.g. accredited journalists), or it will have to analyze each submitted video for trustworthiness. For the first solution you don't need a VA, verified Twitter accounts for trusted sources is enough. The latter is a non-trivial task IMO. – Dmitry Grigoryev Mar 25 '22 at 07:23
  • 1
    @DmitryGrigoryev I was thinking of the second case, and yes it is definitely non-trivial. – mti2935 Mar 25 '22 at 10:35
3

Any person or organization X can sign any data (a video or otherwise) attesting various things, including...

  • Claiming to be the creator.
  • Claiming the content is true.
  • Claiming that it really is them in the video.

But the signature can be applied whether the claim is true or false. So the signature by itself does not prove whether the content is real. It only proves that someone claims it is, and possibly who is making the claim.
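
A minimal sketch of that distinction, assuming Ed25519 keys via the Python `cryptography` package (the claim wording and file name are made up for illustration): the signed message binds the video's hash to an explicit claim, and verification proves only that the key holder made the claim.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()  # X's signing key (illustrative)

video_hash = hashlib.sha256(open("video.mp4", "rb").read()).hexdigest()
claim = f"I attest that the video with SHA-256 {video_hash} really shows me.".encode()

signature = key.sign(claim)

# Verifying proves that X's key signed this claim -- not that the claim is true.
key.public_key().verify(signature, claim)
```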

So how do we use that?

  1. We can prove when someone supports a claim of veracity, but not when they deny it.

If a party wants to deny any involvement with some content, then they can simply refuse to sign it. That is an open problem. But conversely, if a video featuring person X is signed by person X, then we can at least know that X is claiming it's really them.

With respect to "Deep Fakes" there is little practical difference between...

  • A fake video of person X claiming opinion Y really signed by person X.
  • A real video of person X claiming opinion Y really signed by person X.

In both cases the signature itself proves that X claims Y. One can't know if X is lying, but that would also be true if you were in the room with them as they said it.

In summary, we can't know whether a video of X is fake if X claims it's fake. But we can know whether X claims it's true.

  2. Things can change if there is a policy of signing all "official" content.

> Then if a video had no mark, we would know it was rubbish.

That's an interesting proposition. A lack of a signature doesn't prove the content is fake, but it does prove that person X did not choose to claim it is true.

If person X had a policy of always signing any official content, and the content is merely claiming that X has opinion Y, then a lack of a signature by X is a pretty strong indicator that X doesn't claim to have that opinion.
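
A sketch of what a verifier under such an always-sign policy might compute (names and types are illustrative, Ed25519 keys assumed). Keep in mind that an adversary can strip or corrupt a signature from a copy, so "missing" only means no claim could be verified for this particular copy:

```python
from enum import Enum

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

class SignatureStatus(Enum):
    VALID = "signed by X's key and intact"
    INVALID = "signature present but fails verification"
    MISSING = "no signature attached to this copy"

def check_official(video_bytes: bytes, signature: bytes | None,
                   x_public_key: Ed25519PublicKey) -> SignatureStatus:
    if signature is None:
        # Suspicious under an always-sign policy, but not proof of fakery.
        return SignatureStatus.MISSING
    try:
        x_public_key.verify(signature, video_bytes)
        return SignatureStatus.VALID
    except InvalidSignature:
        return SignatureStatus.INVALID
```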

  3. A signature allows a claim of veracity to live on at a later date.

Oftentimes we get content (videos included) from trusted websites. But the content often gets copied and lives on long after it's taken down from the website. A signature can prove that a specific website at one time claimed to have hosted it.

  4. We can prove that a party once claimed that content was true, even if they now deny they ever made that claim.

Public figures often proudly put out content only to later deny its existence.

Just knowing that someone claims something can actually be useful by itself, whether or not the claim is true. For example, politician X wants to unequivocally pronounce their support of popular cause Y by signing some content and putting it on their website. 10 years later, cause Y is really unpopular, and the content is removed from their website.

Now politician X wants to deny that they ever really supported cause Y, but their opponent digs a signed copy out of an internet archive (proving that X is now either lying or really did support Y).

A shrewd politician manages to talk a lot without ever really saying anything, and certainly never digitally signs anything. But not all of them are that wise, and this use case would probably occur quite a lot.

  5. We can decide how much to trust the content based on how much we trust the signer.

In general, we can't know for sure if the content of a video is true, but we might try to assess the probability of veracity, based on how much we trust the signer. It's not perfect, but it's a useful heuristic that humans use all the time.

Jonathan Cross
user4574
  • 1
    This is a very good Answer covering a lot of useful aspects I had not thought of! (And you can quote me on that) – EasyWhenUknowHow Mar 23 '22 at 16:14
  • 1
    "lack of a signature by X is a pretty strong indicator that X doesn't claim to have that opinion" No, it indicates that video lacks any evidentiary value about anything. There may simultaneously exist a video making the same claim that IS signed by X and one that is not (because someone produced their own video, or simply took the signed one and stripped out the signature). – Ben Voigt Mar 23 '22 at 16:55
  • "A lack of a signature ... does prove that person X did not choose to claim its true." No, because their opponent can take the signed video, strip the signature, and release the very same video sans signature. Lack of a signature proves nothing. It is the absence of proof. – Ben Voigt Mar 23 '22 at 16:59
  • 1
    @BenVoigt You are right. The lack of a signature on a specific item doesn't prove anything one way or the other. But if the signer wishes it to be known that they did sign it, the remedy is simply to provide the signed item to whoever they want to present proof to. I guess it would have been more accurate to say "the fact that a signed copy can't be easily obtained" would be the true indicator. – user4574 Mar 24 '22 at 03:09
  • @BenVoigt The question said that the proof should be distributed throughout, and be unspoofable and not removable. – EasyWhenUknowHow Mar 24 '22 at 10:18
  • One important distinction: a signature never proves that a *specific human* signed the data. It indicates that a *particular key* was used to create the signature. The key could have been stolen, used by somebody else, etc. This is important because even with digital signatures, there's significant problems connecting a particular public key to a specific human. – Jonathan Cross Mar 24 '22 at 12:07
  • @EasyWhenUknowHow: Ok, the adversary may not be able to remove proof that a signature once existed, but they can trivially cause the signature to fail validation. Presence of an invalid signature proves nothing. Consider a watermark. An image editor can't undo the watermark, because it doesn't have the data that the watermark obscures. But it can corrupt the watermark by blending a new layer over the top of it. For a cryptographic signature that is intended to detect tampering, producing a video that fails validation will be quite trivial. – Ben Voigt Mar 24 '22 at 15:37
1

Yes, this is possible, and there is already an initiative to implement it called the Coalition for Content Provenance and Authenticity (C2PA), backed by major industry partners.

From their summary, trusted hardware will digitally sign information about a media object (image or video) when it's created, and as the object is modified those transformations are also signed.
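
A heavily simplified sketch of that chained-provenance idea in Python (real C2PA manifests are binary JUMBF/CBOR structures signed with X.509 certificates; the JSON layout and functions below are illustrative only):

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def manifest_digest(manifest: dict) -> str:
    return hashlib.sha256(json.dumps(manifest, sort_keys=True).encode()).hexdigest()

def sign_manifest(key: Ed25519PrivateKey, video_path: str, action: str,
                  parent: str | None = None) -> dict:
    """Record one step of the asset's history and sign it."""
    manifest = {
        "asset_sha256": hashlib.sha256(open(video_path, "rb").read()).hexdigest(),
        "action": action,  # e.g. "captured", "trimmed", "color-graded"
        "parent": parent,  # digest of the previous manifest, chaining the history
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = key.sign(payload).hex()
    return manifest

# The camera signs the capture; an editor signs the edit, pointing back at it.
camera_key, editor_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
capture = sign_manifest(camera_key, "raw.mp4", "captured")
edit = sign_manifest(editor_key, "cut.mp4", "trimmed", parent=manifest_digest(capture))
```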

crypto
0

There are two very different points here.

  1. Can we sign digital videos:

    Obviously yes: a video file is nothing but a file, and it is possible to take a hash of it and sign it (a minimal sketch follows this list). From that point on, there is an unbreakable bond between the signed file and the signer.

  2. What will that guarantee:

    The video will deserve exactly the same trust that the signature does. If a well-known press company signs a document (whatever the content) and you trust that company, then you can trust the document. If somebody you do not know signs the document with a non-repudiation signature issued by a reputable CA, that means an individual human being will be liable for the content of the file and that legal action could be taken if it was a fake - well, provided all that occurred in a reputable country... Less strong than the first use case, but the fear of legal action is usually enough to calm down many people. If the signature has no legal value (or only a little, depending on the country it was issued in), it just means that the document has not been modified since it was produced. Nothing less, nothing more. Specifically, it provides no evidence as to whether or not it is a fake.
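
For point 1, a minimal sketch of hash-then-sign, assuming the Python `cryptography` package with Ed25519 (paths are illustrative); hashing in chunks means a multi-gigabyte video never has to fit in memory:

```python
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_sha256(path: str, chunk_size: int = 1 << 20) -> bytes:
    """Stream the file through SHA-256 one chunk at a time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.digest()

key = Ed25519PrivateKey.generate()
digest = file_sha256("video.mp4")
signature = key.sign(digest)  # the bond between the signed file and the signer

# Any later verification recomputes the hash and checks the signature:
key.public_key().verify(signature, file_sha256("video.mp4"))
```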

An interesting consequence is that you cannot fully rely on a browser for this. You can find certificates issued by reputable CAs to Chinese or Russian citizens. As far as I am concerned, I would not trust a video about the Ukraine war if it was signed by a Russian organization or citizen, because I would not trust the Russian courts to have the same view of the truth that I would...

Serge Ballesta
0

When signing code packages, one of the things we often have to obtain is a signed timestamp from Digicert or someone. If the timestamp could be obtained from a reputable source and cryptographically linked with the video, you would have a basis to prove that the video was generated at the purported time of the event, whereas a deepfake would likely take a significant amount of time to generate.
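
As a sketch of the shape of this idea (the endpoint URL is a placeholder, and a real RFC 3161 exchange uses ASN.1-encoded requests and responses rather than a bare hex digest, so treat this as illustrative only):

```python
import hashlib
import urllib.request

# Only the digest needs to leave the machine, not the video itself.
digest = hashlib.sha256(open("video.mp4", "rb").read()).hexdigest()

req = urllib.request.Request(
    "https://tsa.example.com/timestamp",  # placeholder timestamping authority
    data=digest.encode(),
    headers={"Content-Type": "text/plain"},
)
token = urllib.request.urlopen(req).read()  # TSA's signed statement over the digest

# Anyone can later verify the token against the TSA's public certificate to
# show the video existed, in this exact form, no later than the signed time.
```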

That doesn't solve everything, of course. You could have a pre-trained deepfake model ready to go, or you might not be able to prove the time of the event in the video, but it would go a long way to proving authenticity. I do think MechMK1's answer makes an important point in saying that trust and opinion often matter more than proof.

  • Yes, time is important in making something verifiable, like the old trick of putting today's newspaper in a photo. Location is important. But person is most important, most likely to be altered, in the technology I saw described, and we still don't have any good means of proving identity! For anything! Why in the world hasn't someone solved that yet? Is it so hard? – EasyWhenUknowHow Mar 24 '22 at 01:17
  • One non-computer way of proving identity is to have your dog recognize you. This is pretty much unspoofable, but difficult to carry on a plane (or an escalator, for that matter). Still, it points toward a form of ID that could be implemented. Apparently now the fingerprint ID methods only unlock one device, so the fingerprint data is not transmitted or stored elsewhere. It's a start... – EasyWhenUknowHow Mar 26 '22 at 12:37