Secure Development Is Dead, Long Live Secure Development

An essay I wrote for De Programmatica Ipsum magazine about secure software development and the mindset gap between the world of product makers and the world of security people. Originally published on 3 December 2018.


Doesn’t it feel like the world is on fire?

Security talks and blog posts usually start with horror stories – for instance, “application security is important because without it you will be hacked here and now.” But I have had enough of this doom and gloom in my Twitter feed and mailbox, and I would like to talk about something else.

Only those who (happily) live under a rock could have missed the latest news. One company has a critical security bug, another has leaked the emails and passwords of millions of user accounts (yours and mine included), and another will be fined millions of euros for a GDPR violation.

I do not want to add to the existential dread, but I would love to discuss the problems behind creating secure software.

Why Do We Care?

Thirty years ago, before the Internet became widespread, only military and governmental services were interested in data protection. Back then, it was as complicated to gather and analyze data as it was to steal it. Today more than 1,300 new apps appear in app stores every day, most of them collecting, processing, and transferring data – risking its confidentiality and integrity at each step.

Let’s put on our app developer’s hat.

Risking the data of our customers – those who trust us to handle it with great care – is more than a question of it being “good or bad.” Why? Because of the lawsuits, fines, and other, more terrifying consequences that may loom over the business we build or are employed by. Competitors and attackers will be thrilled to discover you’ve left them an entryway, too.

Protecting users’ data and privacy is not only a sign of product quality.

It is also a way of showing respect for those users and their rights. One could argue that data security has “negative value”: it requires a lot of effort to implement, it’s hard to measure, it’s never “completely” done, and it doesn’t bring money to the business. However, while “positive security” is very hard to define, the lack of this regular “negative” security is something that companies pay for in fines – which may even lead to the death of a business.

No company is “too small” to think about security.

“Oh, we won’t be hacked because our application is small and not very popular” is a poor argument. I’m sure the owner of a Winnipeg mattress store thought so too – before the company was forced to pay a criminal who shut down their servers, halting all sales.

A Bit Of Background

Don’t you have a feeling that developers care more about smooth animations than about data protection? I think the reason might be the gap between the world of product makers and the world of security people – a gap in skills, competence, and mindset.

I came to work in security and cryptography after leaving the shiny world of mobile development. I worked in a “software boutique” company, creating iOS applications and NodeJS- or Python-driven backends. I got my hands on several dozen mobile apps: chats, online shops, medical platforms that process patients’ data, apps for controlling smart devices, and so on.

We mostly worked with startups and small companies, and we didn’t have a separate solution architect or product engineer role. I had the chance to be responsible for the whole mobile architecture: the protocols and API layers between apps, web, and backend; data storage and synchronization; backups and monitoring; and so on.

Now I work in a data security company, making software that protects data and prevents leaks – software designed to be friendly to developers who are not from the “security planet,” like my ex-colleagues.

As I often say, “Cryptography should work everywhere,” and so our open source libraries and tools can be found in small mobile apps as well as in large country-wide infrastructures.

Every day I speak with other software companies that care about data security in their products, and I have realized that they need help.

From these interactions, I have noticed that most security people have no experience in developing truly usable software, while most software developers lack skills in security analysis and architecture. Standing between the two worlds, I feel this problem deeply.

Throwing Hot Potatoes

When I ask developers why they don’t implement basic security-sanity features (like protecting user passwords or limiting access to user data), I often hear the same answer: “The manager didn’t tell us to do it; we don’t have this task on the board.”
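
To make it concrete: some of these “basic sanity” features are genuinely cheap. Here is a minimal sketch of password protection using Python’s bcrypt package (my choice for illustration – any modern password-hashing library, such as argon2, works just as well):

```python
import bcrypt

def hash_password(password: str) -> bytes:
    # bcrypt generates a random salt and embeds it in the resulting hash
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def verify_password(password: str, stored_hash: bytes) -> bool:
    # checkpw re-derives the hash from the candidate password and compares safely
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", stored)
assert not verify_password("wrong guess", stored)
```

Storing only such hashes, never plaintext passwords, is exactly the kind of task that rarely makes it onto the board – yet it takes minutes to implement.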

Imagine having the following conversations with managers again and again:

– How are things going with security in your app?
– We are totally fine; we have smart, competent developers, and they handle everything.
– Does your team have security-related tasks? Do they assess security risks while planning new features? Do you follow the Secure Development Lifecycle? Do you have an internal blue team? Do you run security audits once in a while?
– Well… we don’t do all of that… but our QA team does pen-tests!

It’s a great first step when developers try using pen-tests and security checklists.

However, this is the low-hanging fruit of “let’s build and release quickly, developers will do their best, and a week before release we’ll go through the OWASP checklist and solve the obvious problems.” In the best-case scenario, the team will write down and solve the critical issues; usually, though, people only solve the easy things and put off the complicated ones until the next release – or, more typically, forever.

Building secure software is hard.

An attacker has to make just a few correct guesses, while a developer has to get a great number of things right from the very beginning.

Your system is not secure if you do SSL pinning and store keys in secure key storage in your app, but run an open MongoDB with a default admin password on the backend. Data security works only if it covers the system fully: mobile and web applications, backends, external services, and backups. Solution architects and the security team should care about this and keep feature developers and DevOps aligned.

This approach is effective when a company is not hiding under the “we’re fine” umbrella.

Secure Development

It is impossible to learn secure development just from reading tutorials.

Most “building a secure chat” tutorials start with a “let’s create a new project” step and end with a “now we have encrypted data” step. Moreover, they use encryption libraries whose APIs expect developers to choose between symmetric ciphers and their modes (ECB vs. GCM, huh?), salts and nonces, and so on.

Most developers just copy-paste cryptographic code snippets without understanding them. And even if data encryption is done correctly, one still needs to design the rest of the application flow: use different encryption for data in transit, think through key management procedures, choose appropriate authentication controls, monitoring and alerting techniques, and so on.
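
Here is what even the “simple” encryption step looks like through a low-level API – a sketch using the Python cryptography package’s AES-GCM primitive (my choice for illustration), where every line encodes a decision a copy-paster can get wrong:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # decision: key size
aesgcm = AESGCM(key)                       # decision: authenticated mode (GCM), not ECB

nonce = os.urandom(12)  # decision: 96-bit nonce, must be unique per (key, message)
ciphertext = aesgcm.encrypt(nonce, b"patient record", b"record-id:42")

# the same nonce and associated data are required to decrypt and authenticate
plaintext = aesgcm.decrypt(nonce, ciphertext, b"record-id:42")
assert plaintext == b"patient record"
```

And even with these lines correct, the key still has to be stored, rotated, and revoked somewhere – exactly the part the tutorials skip.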

As developers, we are always trying to cut a few corners, aren’t we?

I recently discovered a “codeless” service that “protects” mobile applications you send to it. Imagine: you upload a compiled .ipa or .apk and your code-signing credentials to their site, and they “magically” protect your app by integrating encryption, SSL pinning, DLP, and obfuscation. Surprisingly, neither managers nor developers are embarrassed. A closed-source system that changes your application flow without giving you any way to see the changes – what could possibly go wrong?

The illusion of security is much worse than its absence.

The Secure Software Development Lifecycle is a methodology that has existed for many years. It is described in detail as MS SDL (deep and solid) and as OWASP S-SDLC (short and modern). The SSDLC distils common sense from the industry’s experience of building secure software and prescribes techniques that cover most risks in most cases.

The SSDLC consists of several steps and accompanies a typical software development approach (including Agile and XP).

  • Risk evaluation and assessment – understanding the business and technological risks that threaten the data the application operates on.
  • Building a threat model – what are the typical threats that the application faces?
  • Security roadmap – defined according to the most probable threats. Typically, a security roadmap covers data minimization, data protection in storage and in transit, access limitation, and monitoring.
  • Secure coding – which libraries and tools to use, where to store keys. Using a good encryption library by itself won’t make your app secure.
  • Secure operations – infrastructure-level changes and procedures: revoking user sessions, updating certificates, patching libraries.
  • Security verification and testing – usually a combination of manual work and automated tools that continuously test the code base, including its dependencies, for vulnerabilities (see the sketch after this list).
  • Threat response and recovery – the plan and procedures to follow upon detecting a threat.

The SSDLC is a continuous process that should restart whenever a vulnerability is suspected or an incident is detected. And it’s a process, not a single feature to add.
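
As an illustration of the verification step, here is a minimal sketch of wiring dependency scanning into a build, assuming the pip-audit tool is available (the tool choice and the fail-the-build policy are my assumptions, not part of the methodology):

```python
import subprocess
import sys

def audit_dependencies() -> int:
    # pip-audit checks installed packages against known-vulnerability databases
    # and exits non-zero when it finds vulnerable dependencies
    result = subprocess.run(["pip-audit"], capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print("Vulnerable dependencies found – failing the build.", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(audit_dependencies())
```

Run as a CI step, this turns “patching libraries” from a good intention into a gate every release has to pass.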

End-to-end Encryption And Marketing

How technically difficult do you think it is to make end-to-end encrypted applications? Well, it’s a bit tricky, but not impossible. Cryptography helps to reduce the attack surface and risks, and prevents attackers and insiders from reading the data. If a system does not know which data it operates on, that data cannot be stolen easily, right?
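
To show that “tricky, but not impossible” is meant literally, here is a minimal end-to-end encryption sketch using PyNaCl’s public-key Box (my choice for illustration – a real application would add key distribution, identity verification, and forward secrecy on top):

```python
from nacl.public import PrivateKey, Box

# each party generates a keypair; only public keys ever reach the server
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob: the relaying server sees only ciphertext
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# Bob decrypts with his private key and Alice's public key
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

The service in the middle relays ciphertext it holds no key for – which is exactly why it can no longer analyze it.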

So why don’t we have more E2EE data exchange in modern software? Because if a company does not have access to its data, it cannot analyze it or use it for advertising. Finding the right balance between missed advertising profits and de-risking the actual ownership of this data is tricky, but doable.

Tomorrow

Secure software development is far from being a popular practice, so we still need reminders of the real consequences of neglecting data security. Until good software and secure software become synonymous, there will be fines and losses. So, what exactly should we do tomorrow to improve the security of our software?

I don’t have an easy answer for you. Maybe start devoting time to security as a sign of professionalism – similar to the way we focus on writing maintainable, testable, manageable code, not just code “that works”?

Cover photo by Nick Boyer on Unsplash.

2021 update

In a nutshell, everything is still the same: bugs everywhere, and many companies treat the SSDLC as “automated tools and pen-tests we run after release, plus awareness training.” :)