TLS, CAs, chains of trust and certificate pinning

I’ve been mocking Sun Tzu and trying to make 3D printing useful in my previous articles. It’s time for some hardcore InfoSec action; more specifically, how to prevent eavesdropping on mobile apps.

When a TLS (Transport Layer Security) certificate is issued, a chain of trust is created that links everyone from the root CA (Certificate Authority) down to the actual website’s certificate.

The chain itself is not a fixed, pre-verified object. A given system, for example a web browser, will consider a server’s certificate valid because it can build a valid chain according to X.509, with all the signatures checking out and all the names matching, starting from a root CA the client already has and ending with the certificate to validate (the server’s certificate). When a website changes its certificate, the rest of the chain remains the same.

Signatures don’t create trust, they transport trust. It still has to start somewhere. Each client (either the browser or the operating system) comes with a list of trusted CAs: the public keys and names of entities deemed trustworthy. The user doesn’t choose them; the operating system or the browser comes pre-loaded with them. The root CAs are trustworthy only insofar as you trust them not to issue certificates with fake information. When a CA signs a certificate for some entity, it is supposed to make sure that the public key it puts in that certificate, along with the entity’s name, is really owned by that entity. Similarly, when a root CA delegates its power to another CA (an intermediate CA), it makes sure through audits and binding contracts that the sub CA is trustworthy and will apply the same rules.

The Public Key Infrastructure relies on the client knowing a priori a handful of public keys owned by trusted CAs, and implicitly trusting everything these CAs sign. The certificates assume a tree-like structure, with the root and sub CAs as the branches and the end entities, the TLS server certificates, as the leaves. A certificate chain is a path from the root to a given leaf.
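To make the chain walking concrete, here is a minimal sketch in Python using the cryptography library. It checks a single link of the chain: the names must line up and the issuer’s public key must verify the subject’s signature. It assumes RSA certificates with PKCS#1 v1.5 signatures, and it leaves out everything a real validator also does (validity dates, extensions, revocation, building the path in the first place).

```python
# Minimal sketch: verify one link of a certificate chain.
# Assumes RSA certificates with PKCS#1 v1.5 signatures; a real validator
# also checks validity dates, extensions and revocation status.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

def verify_link(issuer: x509.Certificate, subject: x509.Certificate) -> bool:
    # The names must line up: the subject's "issuer" field names the issuer.
    if subject.issuer != issuer.subject:
        return False
    try:
        # The issuer's public key must verify the signature over the
        # to-be-signed portion of the subject certificate.
        issuer.public_key().verify(
            subject.signature,
            subject.tbs_certificate_bytes,
            padding.PKCS1v15(),
            subject.signature_hash_algorithm,
        )
        return True
    except InvalidSignature:
        return False

def verify_chain(chain: list) -> bool:
    # chain is ordered root first, leaf last; it is valid when every
    # adjacent pair verifies and the first certificate is an already
    # trusted root CA (that last check is left out here).
    return all(verify_link(chain[i], chain[i + 1]) for i in range(len(chain) - 1))
```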

If a root or sub CA becomes untrustworthy, a process known as revocation is triggered.

When installing a proxy (think Burp or ZAP) to intercept TLS traffic, the pentester (or the attacker!?) exports the proxy’s certificate and installs it as a root CA on the target system. From then on, every TLS certificate the proxy presents is signed by this new root CA.

Before the new root CA is installed, the browser displays the site’s genuine certificate details:

[screenshot: the site’s certificate chain, rooted in a publicly trusted CA]

After the new root CA is installed, the proxy starts signing all the certificates and the browser reports the following:

[screenshot: the same site’s certificate chain, now rooted in PortSwigger’s Burp CA]

In the example above (PortSwigger’s Burp certificate was installed on the target system), this works because each CA can create any certificate it wants; for example, it can create a certificate for google.com even if another CA has already issued one. And the browser will accept these certificates, because it trusts the root CA.

Now, mobile apps are exposed to the same problem. In a simplified scenario, the default validation works something like this: the client makes a connection to the server and the server responds with its TLS certificate. If that certificate was issued by a Certificate Authority trusted by the OS, the connection is allowed, and the server’s public key is then used to negotiate the session keys that encrypt the traffic. From the attacker’s perspective, the mobile device would have to trust the attacker’s certificate. Through phishing, physical access or other means, an attacker can push a CA certificate onto the device and thus be able to perform man-in-the-middle attacks.

Certificate pinning to the rescue

Certificate pinning means making sure the client checks the server’s certificate against a known copy of that certificate hard-coded in the application, not just against the OS’s trusted CAs. Simply bundle your server’s certificate inside your application, and make sure every TLS request first validates that the server’s certificate exactly matches the bundled one. A good article on the technical bits of implementing certificate pinning can be found on OWASP’s web site.
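As an illustration, here is a minimal pinning sketch using Python’s standard ssl module. This is not a production implementation, and the pinned fingerprint is a placeholder you would compute in advance from your own server’s certificate.

```python
import hashlib
import socket
import ssl

# Placeholder: the SHA-256 fingerprint of your server's DER-encoded
# certificate, computed in advance and shipped inside the app.
PINNED_SHA256 = "replace-with-your-servers-sha256-fingerprint"

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    # Normal CA validation still happens; the pin check is an extra gate.
    ctx = ssl.create_default_context()
    sock = socket.create_connection((host, port))
    tls = ctx.wrap_socket(sock, server_hostname=host)
    # Grab the leaf certificate exactly as the server presented it.
    der_cert = tls.getpeercert(binary_form=True)
    if hashlib.sha256(der_cert).hexdigest() != PINNED_SHA256:
        tls.close()
        raise ssl.SSLError(f"{host}: certificate does not match the pinned copy")
    return tls
```

Pinning the SHA-256 fingerprint rather than the raw certificate keeps the comparison cheap; pinning the public key instead of the whole certificate survives renewals that keep the same key pair.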

The problem of multiple end-points

A mobile application can connect to multiple backend services, and multiple endpoints mean multiple public certificates that need pinning. For a handful this might be manageable, but if the number grows it’s advisable to look for another solution. Creating a unique endpoint that acts as a proxy and load balancer for all the requests might be a feasible solution, and it would require just one pinned certificate.
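Under the same assumptions as the sketch above, a pin set is just a lookup table from hostname to fingerprint (the hostnames and values below are hypothetical):

```python
# Hypothetical pin set: one pre-computed fingerprint per backend hostname.
PINS = {
    "api.example.com":  "sha256-fingerprint-of-api-cert",
    "auth.example.com": "sha256-fingerprint-of-auth-cert",
    "cdn.example.com":  "sha256-fingerprint-of-cdn-cert",
}

def pinned_fingerprint(host: str) -> str:
    # Fail closed: refuse to talk to any host we have no pin for.
    if host not in PINS:
        raise ValueError(f"no pin configured for {host}")
    return PINS[host]
```

Every new endpoint adds a pin to maintain and rotate, which is exactly the management burden the single proxy endpoint avoids.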

As additional supporting material and a refresher, I propose the following:

Look too much into the Sun (Tzu) and you will be blinded

You can’t go to a security conference nowadays and not hear at least 700 references to Sun Tzu and his writing, The Art of War. And how important and relevant that book is to the world of Information Security.

But let’s not limit our focus to the InfoSec guys. Life coaches (whatever they are) are abusing the subject with exaggerated comparisons and vague slogans. And the business people, oh, believe me, these are the most creative: telling you how big a war is out there and how to deal with it like a boss. I secretly wish for a cooking show to refer to The Art of War and debate how to diminish the cucumbers’ morale before chopping them and throwing them into the salad. All for a better taste, of course, because, you know, cucumbers are the enemies.

I don’t find it particularly amusing to be the one breaking the spell but somebody has to do it.

So, The Art of War is a military treatise from 2,500 years ago. Another important aspect to consider is that the writing and translation process was complicated, to say the least: the origins of the text and its author are known only to a certain degree of confidence, and the text went through several translation and reinterpretation cycles. It does outline some generic principles which can be applied in various aspects of life, especially if one has the tendency to generalize. Otherwise it talks about:

  • Using gongs, drums, banners and flags to raise morale (funnily enough, some InfoSec companies take this ad litteram)
  • Analyzing weather and terrain conditions. Showing your troops that you packed enough food for the winter. If your rival’s forces are crossing a body of water, don’t meet them in the middle, where you’ll both be bogged down. Instead, wait until half of them have landed and attack while the entire army is divided.
  • How spies must be liberally rewarded and their work highly appreciated.

Again, if one is prone to confirmation bias and willing to look for far-fetched parallels, one can identify in the three bullets above awareness, reconnaissance and intelligence.

For this kind of people, I’m willing to recommend a few other good readings:

  • Little Red Riding Hood, outlining the necessity of risk analysis. Red should have known better than to walk the woods alone.
  • Snow White, which teaches us the need for security assessments. Our heroine could have used one of the dwarfs for QA testing the apple.
  • And finally, my favorite, The Three Little Pigs from which we can learn about the security in depth principle and the need for security architecture.

Next time you go into a meeting and talk about the importance of Information Security, use The Three Little Pigs as your support material (at your own risk).

The Art of War is a good book if read properly and understood in the context in which it was written. China, 2500 years ago. And it’s not the only strategy manual from that region and period, another good read is The Seven Military Classics of Ancient China. The only universal principle coming out of these texts is that you must know yourself, your opponents and the context, and adapt your strategies accordingly.

Short URLs are Harmful for Cloud Data Sharing

I was never a big fan of sharing cloud data through a unique link, rather than nominating the specific people that can access the data. To me it feels like security through obscurity.

It looks something like this:

https://{cloud_storage_provider}/?secret_token={some_unique_token}

All the security of this model relies on the randomness and length of the secret token, but essentially the data is exposed to everyone. Google (Drive) is doing it; Microsoft (OneDrive) is doing it.

Now comes the really silly part. Because the URL is quite lengthy, a decision was made to use URL shorteners (goo.gl, bit.ly, etc.) to distribute the above-mentioned links. Which essentially means that the entropy of the secret link is now reduced to just a few characters (usually around six).
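A back-of-the-envelope comparison, assuming tokens drawn from the usual 62-character alphanumeric alphabet (the exact alphabets and token lengths vary by provider):

```python
import math

ALPHABET = 62  # a-z, A-Z, 0-9 (assumption about the token alphabet)

def entropy_bits(length: int) -> float:
    # Each character contributes log2(62) ~ 5.95 bits of entropy.
    return length * math.log2(ALPHABET)

print(f"16-char token: {entropy_bits(16):.0f} bits")  # ~95 bits
print(f" 6-char token: {entropy_bits(6):.0f} bits")   # ~36 bits
print(f"6-char URL space: {ALPHABET**6:,}")           # 56,800,235,584
```

Roughly 57 billion possible short URLs is well within reach of a distributed brute-force scan, which is exactly what made the research below possible.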

Martin Georgiev and Vitaly Shmatikov from Cornell Tech published interesting research on these shortener services to see how much data they could gather, and the results were impressive and scary: they were able to trace Google Maps searches back to individuals and gain access to confidential data.

http vs https performance

A while ago I had a huge argument with a development team regarding the usage of https. Their major concern was that the impact on performance would be so big that their servers wouldn’t be able to handle the load.

Their approach was to use https just for the login sequence and plain-text communication for everything else. And it was not that they didn’t understand the underlying problem of sending session cookies over an unencrypted channel; they just thought https was too much for the servers to deal with.

Doing some research back then, I found a paper from the ’90s stating that the performance impact was between 10 and 20%, and that mainly because of the hardware (chiefly the CPU) available at the time. With the advancement in computational power, that overhead should have decreased over time.

And indeed, as of 2010, Gmail switched to using HTTPS for everything by default. Their numbers show that SSL/TLS accounts for less than 1% of the CPU load, less than 10 KB of memory per connection and less than 2% of network overhead. Of course there were some tweaks involved, but no rocket science.

1%, 2%, 10 KB. Nothing. I remember somebody saying that 640KB ought to be enough for anyone 🙂 Maybe he knew something. (For the record, Bill Gates didn’t actually say that.)

Five more years have passed since then; hardware is more capable and cheaper, so there’s no excuse not to use https.

I’ve seen poor implementations where all the traffic was passed over a secure channel except the .js files. Needless to say, a MitM attacker can easily modify the .js on the fly and run code in the victim’s browser.

As a closing note: use https for everything, and don’t invoke performance issues; there’s no reason in the current era not to do so.

Is application security an agile process?

No. Judging by the way it is marketed and sold today, application security is not, by any means, agile.

Can it be? Well, Microsoft says so. When it comes to security, Microsoft has changed a lot in the past decade; the development frameworks they offer have built-in security features nowadays. So, if they say security can be built into an agile development methodology, maybe they know something.

Agile

In the old days of development, when the waterfall model was the sine qua non, application security developed alongside it and followed the same waterfall approach.

Let’s look at the major interactions between application security and the software development process in a waterfall approach:

  1. Requirements – AppSec defines non-functional requirements, aka security requirements; high-level risk and threat analysis is also performed during this phase
  2. Design – secure architecture analysis and finer grain risk analysis
  3. Construction – source code analysis
  4. Testing – penetration testing
  5. Debugging – follow up on the security defects mitigation process
  6. Deployment – retesting if needed
  7. Maintenance – regular retesting

The challenges with an agile methodology, if we are to consider the Agile Manifesto, are multiple. Let’s take them one by one:

  1. Requirements – In an agile environment, changing the requirements is welcomed. While the high-level security requirements stay the same, specific requirements based on the functionality of the application are needed. New functionality may open new threats, so a threat analysis should be performed; each functional requirement should also go through a risk analysis process
  2. Design – if the new requirements require a change in the design of the application, a new architecture analysis should be performed to cover the change
  3. Construction – things are no different here compared to the waterfall model; however, because sprints are usually very short (a few weeks or even less), automation is a must
  4. Testing – this is usually one of the major concerns, not only doing a penetration test on the changes, but also assessing the overall security implications
  5. Debugging – same as above, however at a much faster pace
  6. Deployment – similar
  7. Maintenance – in an agile environment, periodic retesting becomes crucial

So, what is there to be done to implement application security in an agile environment?

Here are some things to consider:

  • Security training; training the agile team in information and application security means they will make more security-conscious decisions
  • Have a full-time security expert on the agile team
  • Implement automation in source code analysis; use a solution fully integrated with the development environment, meaning that whenever a piece of code is saved in the repository it gets scanned and potential security defects are sent to the bug tracking system for triage (see the sketch after this list)
  • Implement as much automation as possible in the testing phase; liaise with the QA team and implement security checks during that phase
  • Perform the individual regular activities at certain gates in the process (as opposed to each sprint)
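As a sketch of the source-code automation bullet above, a repository hook could look something like this. Everything here is a hypothetical stand-in: sast-scan and file_bug represent whatever SAST tool and bug tracker integration you actually use.

```python
#!/usr/bin/env python3
# Hypothetical post-receive hook sketch: scan the files changed by the
# last commit and hand any findings to the bug tracker for triage.
# "sast-scan" and file_bug() are stand-ins, not real tools.
import subprocess

def changed_files() -> list:
    # Ask git which files the last commit touched.
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def file_bug(finding: str) -> None:
    # Stand-in: in real life, POST the finding to your bug tracker's API.
    print("triage:", finding)

for path in changed_files():
    # Run the (hypothetical) scanner on each changed file.
    scan = subprocess.run(["sast-scan", path], capture_output=True, text=True)
    for finding in scan.stdout.splitlines():
        file_bug(finding)
```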

It all boils down to the exact configuration of the development environment and the chosen methodology and processes, but application security can and should be mapped onto them, with very good results.