The customer is a large retail business with its own network infrastructure, including web servers and payment portals, which previously handled all of its requirements. The business has since moved to a mobile ordering and payment processing model, with roughly 70% of revenue now generated through a mobile application. It is important to note that the customer saw a significant jump in sales after switching to a predominantly mobile payment model, and the pressure of becoming PCI compliant bore down on the developer team as the number of credit cards processed per year soared past 1 million. With that, of course, came a new set of security concerns. WHITEHACK was contracted to examine the application for weaknesses and to prepare remediation advice in order to ready the organisation for compliance auditing.
Challenge: Low developer security awareness and a need to address key PCI compliance drivers.
Benefits: The training and support we offered as part of the mobile application assessment sparked a strong emphasis on security amongst the developers, with the team now discussing (and understanding) security issues during development iterations and building the need to maintain compliance into their work – with minimal effort!
Over the past several years, this company has identified mobile application security as a key challenge to be addressed, driven mainly by the rising number of data breaches coming through the application layer and the increasing focus of financial regulations. The developer team had previously attempted to drive its own security testing using a tool-based approach, with little success or meaningful results – usually due to the sheer amount of data that needed interpreting.
After assessing the state of the application, WHITEHACK’s application security experts and project management team implemented a centralised security program with a development lifecycle, outlining the specific responsibilities of each developer. This was not limited to the application developers, but was deployed across the entire organisation through various methods.
Before running the mobile application itself on a device such as an iPhone, or using a disassembler to examine the app code, simple tools such as “strings” were used to dump interesting human-readable information about the app. “strings” takes a file as input, extracts readable text, and prints it. From that output, we could narrow down searches with pattern matching. Within half an hour, we had enough useful information to identify: the format of data exchanged between the server and app (with potential injection or attack points); a list of URLs the app communicated with; whether the traffic would be encrypted over the Internet or not; a list of open source projects hosted on github.com that were integrated into the app; messages presented to the user (which we could modify to anything within reason); SQL queries; and information identifying the developer of the app – even the username the developer used to log in to an Apple Mac machine. We also found there were no checks for a jailbroken device (not that such checks are particularly difficult to sidestep in most apps), nor checks for whether the app was actively being traced or debugged. Such checks can be very helpful in some situations for gaining intelligence about how attackers are interacting with your business, so that your security threat modelling can reflect this.
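As an illustration of this first step, a few lines of Python reproduce what “strings” plus pattern matching achieves. The binary contents below are invented for the example, not taken from the customer’s app:

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Yield runs of printable ASCII at least min_len long (what `strings` does)."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

# Hypothetical app binary contents, for illustration only.
blob = (b"\x00\x01https://api.example-store.com/v1/orders\x00"
        b"SELECT * FROM cards\x00/Users/devname/src/app\x00")

found = list(extract_strings(blob))
urls = [s for s in found if s.startswith("http")]
queries = [s for s in found if s.upper().startswith("SELECT")]
print(urls)     # endpoints the app talks to
print(queries)  # embedded SQL worth investigating
```

Filtering the extracted strings like this is how a half-hour pass over a binary turns into a concrete list of attack surfaces to investigate.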
Within the same 30 minutes we also collected a limited amount of unencrypted communications between the mobile app and the back-end, allowing us to easily inspect the data as it passed through the lab machine. Had we been a malicious actor, or a competitor gathering information, new product lines, prices, private customer information and payment details would have been compromised as a result.
In the app bundle directory we found a pre-created SQLite3 database file, which immediately revealed some interesting information. Tables indicating where credit card data and personally identifiable information (PII, as defined by the Australian Privacy Principles) were to be stored presented an opportunity to modify the app’s code to force the use of an encryption key of our choice, letting us examine what would otherwise have been safely encrypted data on the device.
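A sketch of that inspection step, using Python’s built-in sqlite3 module against a stand-in database (the table and column names here are invented for illustration; the real schema is not reproduced):

```python
import sqlite3

# Build a stand-in for the pre-created database shipped in the app bundle.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stored_cards (pan TEXT, expiry TEXT, holder TEXT)")
con.execute("CREATE TABLE customer_pii (name TEXT, address TEXT, dob TEXT)")

# The schema dump is what an assessor reads first: table and column names
# alone reveal where card data and PII are meant to live.
schema = [row[0] for row in con.execute(
    "SELECT sql FROM sqlite_master WHERE type = 'table'")]
for stmt in schema:
    print(stmt)
```

No decryption is needed at this stage: the schema alone tells an attacker which tables are worth targeting once the data is populated.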
Simple tests verified some initial thoughts: by searching for error messages, we replaced those more likely to be encountered with text of our choice.
For example, one replacement of an error message in the customer’s app suggested the user confirm their payment information by sending credit card details to an email address, which could have been made to look like an “accounts” address at the business.
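Conceptually the patch is trivial: a hex editor overwrites one embedded string with another of the same length, so file offsets are preserved and nothing else shifts. A minimal Python sketch, with an invented binary fragment and invented messages:

```python
def patch_string(binary: bytes, old: bytes, new: bytes) -> bytes:
    """Swap one embedded string for another without shifting offsets,
    as a hex editor would: the replacement is padded to the same length."""
    if len(new) > len(old):
        raise ValueError("replacement longer than original")
    return binary.replace(old, new.ljust(len(old), b"\x00"))

# Hypothetical app binary fragment containing a user-facing error message.
app = b"APPDATA\x00Payment failed, please retry.\x00MOREDATA"
patched = patch_string(app,
                       b"Payment failed, please retry.",
                       b"Mail card details to accounts")
```

The patched file is byte-for-byte the same size, which is exactly why a lack of integrity checking makes this kind of tampering invisible to the app itself.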
This simple test also revealed the lack of app integrity checking. While app integrity checking is still an open problem being worked on by security research companies, it is important to take into consideration. What exactly is app integrity? In this case, we were able to modify the code to gather information and pass it on to a custom location; there was no mechanism for the back-end to verify it was communicating with the legitimate mobile application.
It was easily verifiable that the server infrastructure accepted almost any data contained in XML sent from the app. In essence, the server placed complete faith not in the mobile application itself, but merely in the expected data format, enabling the creation of simple custom scripts and programs to send information that was accepted as valid customer requests which (apparently) came from the app. No authentication was necessary. This quick examination of whether the server would accept anything within reason also showed a risk of potentially serious attacks, such as a denial of service (DoS).
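A sketch of how little is required to forge such a request. The element names and values below are invented; we stop at constructing the payload rather than sending it:

```python
import xml.etree.ElementTree as ET

# Forged "customer request": field names are hypothetical, but the point
# stands - nothing here requires the real app, only knowledge of the
# expected data format recovered from the binary and captured traffic.
order = ET.Element("order")
ET.SubElement(order, "customer_id").text = "12345"
ET.SubElement(order, "item").text = "SKU-0001"
ET.SubElement(order, "qty").text = "1"
payload = ET.tostring(order, encoding="unicode")
print(payload)  # a backend that validates only the format will accept this
```

Actually posting the payload would be a one-liner with urllib.request; a back-end that authenticates only the data format, not the sender, cannot tell this script from the genuine app.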
That DoS threat was real and verified. A simple proof-of-concept (PoC) program we wrote could have been pointed at the customer’s site, which was happily accepting all the data into its back-end database system: flooding it with garbage very quickly, slowing it to a crawl, and potentially consuming so many server resources that other critical applications, such as the web server, would become unresponsive. Attempting to block the attack could have been futile; banning an IP address is not enough against a dedicated attacker with the resources to attack from a variety of locations. In this case, business continuity could have been badly affected.
This in turn prompted an investigation into the relationship between the mobile application back-end, the web server and the internal network, which was in scope for this assessment. The internal IT team were quite surprised at the level of access we were able to gain simply from the mobile app back-end.
Information about “wrapper code” was easily identifiable without any reverse engineering tools (sophisticated or otherwise), and located quickly on GitHub and other open source repositories. It is not particularly unusual to find references to drive letters, path names and source code file names in binaries, and in this case we located several further source code file names referenced on the app developer’s desktop, which we could simply paste into Google and search for. This helped us piece things together well enough to compensate for not having access to the app’s source code.
Development server locations were also disclosed in the application binary, and those servers were vulnerable to attack. Had the business gone live with those improperly secured and hijacked development servers after we had made changes to the development environment, we could have compromised a significant number of users’ sensitive information and destroyed their faith in the business’s ability to secure it.
What else might be left behind in a mobile application? Some of our inspections revealed enough information not only to identify particular mobile application developers, but to stalk them. A malicious actor may not have the best intentions when viewing YouTube videos taken by a developer in his car, driving near his home, or playing with his children. If a developer were open to blackmail (for instance, having documented a struggle with mental illness), a malicious actor might attempt to take advantage of those circumstances to gather sensitive information about a business for which he has developed a mobile application. While there is no evidence we are aware of that this type of attack is happening “in the wild”, it is a serious consideration in the development of applications for sensitive industries and government, and a lesson in due diligence.
Why should you care about encryption? A mobile app that communicates with a back-end server over the Internet should protect its traffic. Using SSL/TLS (as in https:// connections) is now industry standard, even if nothing especially sensitive or valuable is being communicated over the Internet: the app will at least be authenticating with usernames and passwords, or tokens. But will it also be sending and receiving sensitive information? Does your app deal with medical information, or are you a law firm, for example?
Encryption is not just about “locking up” or “hiding” information, either: it’s also about integrity of information.
If we can easily see your traffic unencrypted over the Internet, we can use a “proxy” to modify the traffic in transit (perhaps automatically) before passing it on to the intended recipient. We could be sending an important business transaction down the sink, or using your commercial-in-confidence data to our own advantage if we were a competitor; we could just be a malicious attacker in it “for laughs”. The point is that good encryption technology not only protects information from being seen: the mathematical properties of good ciphers also protect information from being tampered with or altered.
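A small illustration of the integrity point, using an HMAC, one standard integrity mechanism (the key and message below are invented for the example):

```python
import hmac
import hashlib

secret = b"shared-key-for-illustration"   # assumption: both ends hold this key
message = b"transfer=1000&account=12345678"

# Sender attaches an authentication tag computed over the message.
tag = hmac.new(secret, message, hashlib.sha256).digest()

# An in-transit attacker alters the amount...
tampered = message.replace(b"1000", b"9000")

# ...and verification on the receiving end fails.
ok = hmac.compare_digest(
    hmac.new(secret, tampered, hashlib.sha256).digest(), tag)
print(ok)  # False: the integrity check catches the change
```

Without a key, the attacker cannot forge a matching tag; TLS provides this same property (among others) for the whole connection, which is why plaintext traffic is vulnerable to silent modification as well as eavesdropping.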
This mobile application test report was full of surprises, and many of them came before we even analysed the app with a disassembler or actively debugged it. A simple tool to dump human-readable text from the binary and a “hex editor” to modify machine-code bytes were all it took to cause potentially serious problems.
Some of these serious and common issues are listed in the Open Web Application Security Project’s (OWASP) Top Ten list. The point is that simply running a command to extract text strings from an application binary may reveal more than you intended about how the app works, what databases it deals with, how they are accessed, and what commands are being sent. SQL is a query language used to interact with back-end database systems; if a mobile app simply passes information on to a database, an attacker can perform what is known as an SQL injection attack. An “injection” could be a modification of SQL to allow unauthorised access to data, to run a command that alters the database in some way, or simply to exfiltrate all data from the database. If those text strings can be changed, the back-end should be prepared to deal with the potential for SQL injection attacks.
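A minimal, self-contained illustration of the difference between splicing app-supplied text into SQL and using a parameterised query (the schema and data are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, card TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [("alice", "4111-xxxx"), ("bob", "4222-xxxx")])

# Vulnerable: user-controlled text is spliced straight into the query,
# exactly what happens when a back-end trusts strings coming from the app.
attacker_input = "nobody' OR '1'='1"
rows = con.execute(
    "SELECT * FROM users WHERE name = '%s'" % attacker_input).fetchall()
print(rows)       # every row comes back

# Safe: a parameterised query treats the input as data, never as SQL.
rows_safe = con.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()
print(rows_safe)  # []
```

The vulnerable query becomes `WHERE name = 'nobody' OR '1'='1'`, which is true for every row; the parameterised version looks for a user literally named `nobody' OR '1'='1` and finds nothing.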
Our readers will note that some of these simple attacks on app binaries require a sophisticated scheme to trick users into downloading modified apps, or access to the device itself; although we would do well to remind ourselves of the tens of thousands of unsuspecting users who have downloaded bogus anti-virus software. A pause for thought: is that free password manager app really what it seems? Even if mobile apps do not lend themselves to easy widespread exploitation, mobile app testing has surely earned its place in business, given the amount of sensitive data stored by mobile applications, the amount of data they can generate over the public Internet, and the potential for specific, targeted attacks on businesses.