Can we get some kind of simple integrity check for plugins? And maybe real protection someday?

I’m contemplating shipping a commercial plugin, and I’m concerned that people will hack around my (homegrown) license checks.

So there ought to be a way to sign a plugin (OPTIONALLY! let’s not get into the CEP signature nightmares) with a private key that only the developer and Adobe share, and which XD could check when loading the plugin. If the signature check fails, then the plugin can’t be loaded.

That should stop people from hacking the source and working around licensing code, or breaking stuff in general.

Other thoughts?

Of course, it doesn’t help protect intellectual property, so people could still read the licensing code and try to reverse engineer licenses; I guess obfuscation would help a bit, or even just complex algorithms that are hard to understand. Security through obscurity.

So the ultimate answer is to have plugins whose code is protected, again, perhaps by having a private key (shared between Adobe and the developer only) that is used to decrypt the plugin at installation time.

2 Likes

While I agree that something like this would be necessary, I’m not quite sure how a key system would work (especially at installation time).

As soon as a plugin is installed (and installation is necessary, I think, the way the APIs are set up, especially with file access inside the plugin folder), it’s just a simple folder with all the contents “plainly visible”.

Even if there were some sort of key, I’d only have to change your plugin id in the manifest.json (no big deal) and XD wouldn’t be able to differentiate between your plugin and “mine” (and therefore wouldn’t know which key it should match).

The only way I can think of that would prevent others from inspecting, changing and circumventing your code would be a “compiled” plugin format putting at least all of the source code and a manifest into some sort of encrypted file, but in all those cases, we’re talking about security by obscurity. Having said that, although it of course isn’t good practice, if there’s no better way than said security by obscurity, that’s something we can already do in our plugins (e.g., there are probably build tools out there that entangle your code with integrity and licensing checks so badly that it would be easier to rebuild a plugin than to crack it).

Don’t get me wrong here: I absolutely want something like this (and am also voting for it), but personally, I don’t think there’s any “clean” way of having this the way the APIs work…

1 Like

No security through obscurity needed.

When XD installs the plugin, it would also download a decryption key from its database (set in console.adobe.io) and store that key in its own local encrypted database. The JS files could remain encrypted on disk, and then decrypted (with the key above) as they’re loaded into memory at start-up.

The cost would be negligible, and the result iron-clad, unless people started poking around with a machine-level debugger, and XD could defend against that on modern OSes.

Fair enough :wink:.

In that case, however, it would be important to debate which files to encrypt (and when – is it optional?). I use simple, anonymous (no user IDs or anything like that) analytics for my plugins and have already gotten concerned reactions from users, but I could always reassure them by explaining how anonymous my analytics are and that they can, of course, check my code to confirm that.

If files are encrypted, no one would know if and – more importantly – how plugins handle their data. With the target group – designers – often working under NDAs and therefore rightfully concerned about data getting submitted to an external party, and at a time when – thankfully – many users care about privacy, obscuring plugin code seems to me like a bad approach that could severely lower trust in plugins.

The only way to serve both sides of this would be to make the encryption optional and show some kind of “warning” for plugins that use it. As a user, I could understand the reasoning for using it in a paid plugin from a known source (where it wouldn’t affect my trust), but I could “stay away” from free plugins using encryption (or other plugins where it wouldn’t be necessary, or where I don’t trust the developer), instead of being left to guess what a plugin does with my data.

The thing is that it is – of course – sometimes necessary to use such systems, and for those cases there isn’t another option (which is why I’ve voted for this request). I do not think, however, that this should be encouraged in other cases, as I do – very much – believe transparency often outweighs security in these questions…

PS: Please don’t see this as me saying I don’t think something like this should exist (I’ve voted for it, after all). I do, however, believe it’s important to discuss the issue to find the best – or, in this case, least bad – solution to a problem where there’s no clear right or wrong.

99.99999% of the world’s population (designers in particular) couldn’t possibly dig into the sources (especially after webpack/minification has run) and understand them (e.g., you could have malicious code that’s disguised as something else and would take a genius to spot), so I don’t think the argument of “open source means trustable” really applies.

1 Like

@cpryland
The thing is that the few who can do that are able to call out malicious plugins right now (personally, I often look through the source code of plugins – partly to see whether I want to use them, privacy-wise, and partly because I like to check which plugins use my libraries).

Also, reading and interpreting a bit of JS is something quite a few designers can do (and if not, there are often developers in the agency who can). The thing is that there’s no way for a user to know whether a plugin “talks” to a server without looking at the source code (short of using real security tooling to inspect the TLS-encrypted HTTPS requests a plugin can perform). Right now, I can get around that by looking at the code, and with many downloads, I can trust that a plugin hopefully doesn’t do anything malicious, since there are people like me who actually check plugins for things like this. If even the code that the plugin runs is encrypted (and it isn’t optional, so that plugins with nothing to hide also get encrypted – whether they want it or not), there is no way for me to trust any plugin from a less-well-known developer, and therefore no way for me or a designer to use that plugin in an NDA context.

It is very easy to spot HTTP requests in plugins (and something I always do when installing a plugin) since somewhere, it has to use

  • a URL
  • fetch() or an XMLHttpRequest

This is something I do check (and if only one person checks it and calls out malicious stuff, it can provide a certain level of trust), which means that at least right now, “open source means trustable” applies. If it – at some point – doesn’t apply anymore, we’ll have to think of something else that builds trust (plugins requiring permission to make HTTP requests?), but for now, it is the way it is.
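The manual audit described above can be sketched as a simple source scan. This is only an illustration of the idea (the pattern list and function name are my own), not a real security tool:

```javascript
// Telltale signs of network access in a plugin's source.
const networkPatterns = [
  /\bfetch\s*\(/,          // fetch() calls
  /\bXMLHttpRequest\b/,    // classic XHR usage
  /https?:\/\/[^\s"'`]+/,  // hard-coded URLs
];

// Return the lines of a plugin's source that look like network access.
function findNetworkUsage(source) {
  const hits = [];
  source.split("\n").forEach((line, i) => {
    if (networkPatterns.some((pattern) => pattern.test(line))) {
      hits.push({ line: i + 1, text: line.trim() });
    }
  });
  return hits;
}

const sample = [
  'const data = { name: "anonymous" };',
  'fetch("https://example.com/analytics", { method: "POST" });',
].join("\n");

console.log(findNetworkUsage(sample)); // flags only line 2
```

Of course, once the source on disk is encrypted, even this trivial check becomes impossible – which is exactly the trust problem described above.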

Last, but not least, I’d like to mention that there are also many developers (including my colleagues, friends and myself) using XD, and web developers will hopefully know how to search for the criteria mentioned above in source code.

@kerrishotts @peterflynn

Any chance we could get some kind of plugin hacking protection in-built to XD?

This feature request has a fairly high vote level (only 6, but that’s high from what I’ve seen).