Friday, 30 March 2012

Seven Web Server HTTP Headers that Improve Web Application Security for Free

We deal with clients and teams of all maturity levels here at Recx. They range from those who understand risk and want deep level expertise to those who are just realising they need security (typically after an incident) and need to get to grips with the basics.

As it's Friday we're going to take a step back from the low level and focus on some web basics. Our Chrome extension, which checks for web server headers that improve security, has been available since August 2011. It's a tool designed for anyone, be they a developer, quality assurance tester or security professional, to make sure we're all getting the basics right. Today we're going to look at the web server headers it checks for.

What follows is an explanation of each of the headers we check for. The text is mainly lifted from the Chrome extension, so don't be alarmed if you get a weird sense of déjà vu.

X-Content-Type-Options
The 'X-Content-Type-Options' HTTP header, if set to 'nosniff', stops the browser from guessing the MIME type of a file via content sniffing. Without this option set there is a potential increased risk of cross-site scripting.

Secure configuration: Server returns the 'X-Content-Type-Options' HTTP header set to 'nosniff'.

X-XSS-Protection
The 'X-XSS-Protection' HTTP header is used by Internet Explorer version 8 and higher. Setting this HTTP header will instruct Internet Explorer to enable its built-in anti-cross-site scripting filter. If enabled without 'mode=block', there is an increased risk that otherwise non-exploitable cross-site scripting vulnerabilities may potentially become exploitable.

Secure configuration: Server returns the 'X-XSS-Protection' HTTP header set to '1; mode=block'.

X-Frame-Options
The 'X-Frame-Options' HTTP header can be used to indicate whether or not a browser should be allowed to render a page within a <frame> or <iframe>. The valid options are DENY, to prevent the page from being framed at all, or SAMEORIGIN, to allow framing only from the originating host. Without this option set the site is at a higher risk of click-jacking unless application-level mitigations exist.

Secure configuration: Server returns the 'X-Frame-Options' HTTP header set to 'DENY' or 'SAMEORIGIN'.

Cache-Control
The 'Cache-Control' response header controls how pages can be cached, either by proxies or by the user's browser. Using this response header can provide enhanced privacy, by keeping sensitive pages out of the user's local cache, at a potential cost to performance. To stop pages from being cached the server returns the 'Cache-Control' HTTP header set to 'no-store'.

Secure configuration: Either the server sets a cache control by returning the 'Cache-Control' HTTP header set to 'no-store, no-cache', or each page sets its own via the 'meta' tag for secure connections.

Updated: The above was updated after our friend Mark got in touch. Originally we had said 'no-store' alone was sufficient, but as with all things web related it appears Internet Explorer and Firefox work slightly differently (so everyone ensure you thank Mark!).


X-Content-Security-Policy
The 'X-Content-Security-Policy' response header is a powerful mechanism for controlling which sites certain content types can be loaded from. Using this response header can provide defence in depth against content injection attacks. However, it's not for the faint-hearted in our opinion.

Secure configuration: Either the server sets a content security policy by returning the 'X-Content-Security-Policy' HTTP header, or each page sets its own via the 'meta' tag.

Strict-Transport-Security
The 'HTTP Strict Transport Security' (Strict-Transport-Security) HTTP header instructs the browser to only access the site over a secure connection, and specifies how long to remember this instruction, thus forcing continued secure usage.

Note: This is a draft standard which only Firefox and Chrome currently support, but it is already deployed by sites such as PayPal. This header can only be set, and will only be honoured by web browsers, over a trusted secure connection.


Secure configuration: Return the 'Strict-Transport-Security' header with an appropriate timeout over a secure connection.

Access-Control-Allow-Origin
The 'Access-Control-Allow-Origin' HTTP header is used to control which sites are allowed to bypass the same-origin policy and send cross-origin requests. This allows cross-origin access without web application developers having to write mini proxies into their apps.

Note: This is a draft standard which only Firefox and Chrome support; it is also advocated by sites such as http://enable-cors.org/.

Secure configuration: Either do not set the 'Access-Control-Allow-Origin' header, or return it restricted to a trusted set of sites.
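To make the checks concrete, here's a minimal sketch (in Python, our own simplification of the extension's logic) that audits a dictionary of response headers. It covers the five headers above that have clear-cut secure values; the content security policy and CORS headers need site-specific policies, so they're left out:

```python
# Simplified audit of response headers against the secure
# configurations described above. Header names and values are those
# from the post; the check logic is our own simplification.

SECURE = {
    "X-Content-Type-Options": lambda v: v.strip().lower() == "nosniff",
    "X-Xss-Protection": lambda v: v.replace(" ", "").lower() == "1;mode=block",
    "X-Frame-Options": lambda v: v.strip().upper() in ("DENY", "SAMEORIGIN"),
    "Cache-Control": lambda v: {"no-store", "no-cache"}
        <= {t.strip().lower() for t in v.split(",")},
    "Strict-Transport-Security": lambda v: v.strip().lower().startswith("max-age="),
}

def audit(headers):
    """Return a list of (header, problem) findings for a response."""
    present = {k.title(): v for k, v in headers.items()}  # case-insensitive lookup
    findings = []
    for name, is_secure in SECURE.items():
        value = present.get(name)
        if value is None:
            findings.append((name, "missing"))
        elif not is_secure(value):
            findings.append((name, "insecure value: " + value))
    return findings
```

Feeding it an empty header set flags all five as missing; a fully configured response comes back clean.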

Fin.
Not a header, but the end of this post. We hope you've found this introduction useful. As for how to set the headers in your application, application server, embedded appliance or web server, that is an exercise for the reader, or the subject of another blog post or five, we suspect.

For those buying 'web based' products, services or appliances, finding these headers not being set can, in our experience, be indicative that the vendor or service provider isn't really up on web security, and is probably worth a closer look.

Thursday, 29 March 2012

SDLs Identify Security Debt - The Need for Risk Management and Cost Consideration

We published a free / paid for (your choice) eBooklet on software security debt earlier this month. We've previously posted another extract from the paper on 'The Value in Measuring Security Debt'. What follows is another extract (1 of 20 pages) discussing the correlation between SDLs and security debt discovery, and the general rise in debt through the lifetime of a project.


Secure Development Life Cycles Identify Debt
It’s important to understand the relationship between an SDL and security debt. The precursor to an SDL is security mindfulness. Security mindfulness is where a formal SDL may not be deployed throughout the organisation, but assurance processes or security related activities do occur at the different phases of development or testing. These activities will then likely mature into a full SDL.

When adopting an SDL the benefits of identifying vulnerabilities earlier in the lifecycle will be seen for new development. However, when SDL or security mindfulness activities are applied to both new and old development there will be prolonged periods of implementation debt discovery.

As these activities increase, the likelihood is that the volume of issues found in software will quickly start to outpace the resources available to resolve them on a per release or per product basis. The reasons for the acceleration in the discovery of security issues can be numerous; however, likely drivers include:

  • Increased manual code coverage.
  • Increased use of static code analysis.
  • Increased use of automated security testing (fuzzing).
  • Development and testing team knowledge and awareness of security issues enabling identification.
  • Root cause analysis and variation identification based on publicly disclosed flaws.


As a result of this increase in the volume of issues and the associated resource constraints, organisations tend to focus only on the most severe issues. Over time, a mountain of security debt starts to grow fuelled by the volume of lower impact issues. However, while individual issues may be risk rated at a certain severity level, the same is not true for combinations of issues. That is to say, a number of distinct lower impact issues when combined or chained together, can carry equal impact to a single higher rated issue. While the complexity related to discovery and exploitation is greater, the ultimate impact can be the same. SDLs today do not adequately deal with this scenario of aggregating lower severity issues to understand impact.
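The aggregation problem can be illustrated with a toy model (entirely our own illustration, not part of any SDL methodology): rather than rating each issue in isolation, a chain of distinct linked low-severity issues gets promoted a level, so the aggregated debt isn't hidden by per-issue ratings.

```python
# Toy severity model: a chain of three or more linked issues is
# promoted one severity level above its strongest member.

SEVERITY = {"low": 1, "medium": 2, "high": 3}

def chain_severity(chain):
    """Rate a chain of linked issues rather than each in isolation."""
    top = max(chain, key=SEVERITY.get)
    if len(chain) >= 3 and SEVERITY[top] < SEVERITY["high"]:
        return {1: "medium", 2: "high"}[SEVERITY[top]]
    return top
```

Under this scheme three chained 'low' findings rate as 'medium', capturing the point that combinations can carry the impact of a single higher-rated issue.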

The Rise of Security Debt
Whilst it’s tempting to think that the risk of security debt is not significantly different from that of technical debt (we cover technical debt in the paper), there are important differences to consider. These stem from the fact that, if security debt is discovered and exercised, the impact on both vendor and users is typically greater than for technical debt.

As the challenges of software security have become more widely understood, methodologies to identify and address these challenges have been developed (similar methodologies have also been developed to address software quality). The processes and procedures to improve software security typically manifest themselves as an SDL in one guise or another. While an SDL is a useful set of methodologies and processes for identifying, resolving or mitigating security exposures within software development, it is not without small print.

The reality is that SDLs are variable in their application, coverage and cost, coupled with the challenge of actually addressing the issues once identified. At every stage of an SDL, when an issue is discovered there is a risk, cost, time and benefit analysis for that version of the software product. The generally accepted wisdom, that identifying, mitigating or resolving a security weakness earlier in the life cycle is cheaper, is in Recx's opinion valid. However, attempting to do so is not without associated cost. This fact is sometimes lost in the SDL rhetoric and needs to be kept in mind.

So that's it for this extract. If you're interested in reading more, such as the types of debt events, we encourage you to read the paper.

Tuesday, 27 March 2012

Windows 8 App Container Security Notes - Part 1

Windows 8 is coming, and with it Metro and a revised application sandboxing model. We plan to do a series of posts over the next few months summarising what we discover about AppContainer or App Container (Microsoft seem to flick between the two terms). This introductory post is really a summary of the information we've gleaned from the Internet and the SDK, brought together into a single coherent resource.

Note: As this is all subject to change prior to release what is true today may not be accurate tomorrow. All of Microsoft's documentation is stamped '[This documentation is preliminary and is subject to change.]'. The same should be said for this post as well.

What is App Container? 

App Container was introduced formally via the recent post by Microsoft titled 'Understanding Enhanced Protected Mode'. In this post Microsoft states:
"Windows 8 introduces a new security sandbox, called AppContainer, that offers more fine-grained security permissions and which blocks Write and Read Access to most of the system. There’s not a lot of documentation specifically about AppContainer because all Metro-style applications run in AppContainers, so most of the documentation is written from that point of view. For instance, here’s a page that describes the capabilities that a Metro-style application can declare that it needs: http://msdn.microsoft.com/en-us/library/windows/apps/hh464936.aspx. Under the covers, it’s the AppContainer that helps ensure that an App does not have access to capabilities that it hasn’t declared and been granted by the user."
Distilling this down: they've introduced a high-level capability model that translates to a more restricted version of low integrity processes, going further than before in restricting IPC between processes, file access and even loopback network access.


Example Constraints with the new Capabilities

A Japanese blog post in October last year documented the constraints from early Microsoft documentation. These are:
  • By default, an app can access only its AppData folder (including local, roaming, and temp sub-folders, all of which are deleted when a user uninstalls an app).
  • To directly access anything else through APIs, such as media libraries or documents, an app must declare that intent in its app manifest or a user must grant access by explicit action. Otherwise the APIs used to access the file system will fail.
  • Access to devices and sensors, especially webcams, microphones, and geolocation sensors, are also protected. Unless the app declares its intent, which is visible to users within the Store, the app will be denied access to those resources at runtime.

Integrity Levels and App Container

App Container is actually its own integrity level. This can be seen in the screen shot below:


Mozilla have done an excellent job at documenting some of the new broker processes:
"The other child processes of svchost are the RuntimeBroker which is in charge of accessing privileged data or devices on behalf of regular metro apps and wkspbroker which is the "Remote app and desktop connection runtime broker". To sum up, svchost and wkspbroker.exe are not metro apps but metro support infrastructure." 
The Mozilla team also made this observation:
"Another difference is that named kernel objects of an AppContainer process are in a different namespace. For example, in this case the regular 'interactive user' session is session 3 so a regular named object 'Foo' from a traditional desktop application will be "\Sessions\3\BaseNamedObjects\Foo" which is what we see for IE10, while for metro apps it would be:
"\Sessions\3\AppContainerNamedObjects\S-1-15-2-wwwwwwww-xxxxxxxx-yyyyyyyy-zzzzzzzz\Foo 
Were w,x,y,z are are part of a unique SID which is neither the interactive user SID or the user logon SID. In fact, it seems to be some kind of per-application id. " 
This enumeration is possible via the GetAppContainerNamedObjectPath function.

Identifying an App Container Compatible Binary

So there appear to be two key ways to identify an App Container compatible binary. 

The first is through the DLL Characteristics field in the PE header (note: Microsoft hasn't issued an updated PE/COFF specification to include this yet). This characteristic is set via the Microsoft linker using /APPCONTAINER[:NO] (Visual Studio 2011 and above).

How can we verify this? Well, if we look inside C:\Program Files (x86)\Windows Kits\8.0\App Certification Kit\AppContainerCheck.dll we see:

ImageOptionalHeader optionalHeader = item.PE.OptionalHeader;
if (optionalHeader != null)
{
    ushort num = (ushort)optionalHeader.GetField(ImageOptionalHeader.Fields.DllCharacteristics);
    // 4096 == 0x1000, the IMAGE_DLLCHARACTERISTICS_APPCONTAINER bit
    if ((num & 4096) != 0)
    {
        checkResult = CheckResult.Passed;
        additionalInfo.Add(Resources.Information, Resources.ImageAppContainer);
    }
}
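The same test can be sketched outside the certification kit. The following is a minimal Python equivalent (our own sketch; the 0x5E offset of DllCharacteristics from the PE signature comes from the PE/COFF optional header layout and applies to both PE32 and PE32+):

```python
import struct

IMAGE_DLLCHARACTERISTICS_APPCONTAINER = 0x1000  # the 4096 tested above

def is_appcontainer_image(data):
    """Check the AppContainer bit in a PE image's DllCharacteristics.

    `data` is the raw file contents. The e_lfanew field at offset 0x3C
    gives the PE signature offset; DllCharacteristics sits 0x5E bytes
    beyond that for both 32- and 64-bit images.
    """
    if data[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    (pe_off,) = struct.unpack_from("<I", data, 0x3C)
    if data[pe_off:pe_off + 4] != b"PE\x00\x00":
        raise ValueError("PE signature not found")
    (dll_chars,) = struct.unpack_from("<H", data, pe_off + 0x5E)
    return bool(dll_chars & IMAGE_DLLCHARACTERISTICS_APPCONTAINER)
```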

The second way to identify App Container applications is through their manifest. The Visual Studio 2011 GUI provides a breakdown of the capabilities (more on these later) and allows them to be modified. What this translates to is something like:
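A representative capabilities section from an application manifest (a hypothetical example; the capability names are those listed in the Metro capability documentation) looks something like:

```xml
<Capabilities>
  <Capability Name="internetClient" />
  <Capability Name="musicLibrary" />
</Capabilities>
```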


                
    

SIDs and Tokens

Windows is a world of SIDs and Tokens. So it should come as no surprise that both have been used to support the App Container model.

New SID Constants

Microsoft have defined two new sets of SID constants to support App Container. The first of these are the App Container SID Constants.

The second set are the Capability SID Constants. These define if the resulting SID will have the capabilities such as being an Internet Client, Server (or both), access to Pictures, Music, Documents, Shared Certificates or Removable Storage.

App Container Tokens

Microsoft Windows is where the token is king. Some tweaks have occurred to tokens in order to support App Containers. 

At a high level, the AuthzGetInformationFromContext function can be used with an AUTHZ_CONTEXT_INFORMATION_CLASS enumeration value of AuthzContextInfoAppContainerSid to retrieve a TOKEN_APPCONTAINER_INFORMATION structure. There is also a setter function by the name of AuthzSetAppContainerInformation.

A token verification function capable of handling App Containers is CheckTokenMembershipEx. It allows the caller to specify CTMF_INCLUDE_APPCONTAINER, which according to Microsoft will:
"allows app containers to pass the call as long as the other requirements of the token are met, such as the group specified is present and enabled"

These are the changes in WinNT.h that reflect the above, compared to the Windows 7 SDK:
//
// Application Package Authority.
//

#define SECURITY_APP_PACKAGE_AUTHORITY              {0,0,0,0,0,15}

#define SECURITY_APP_PACKAGE_BASE_RID               (0x00000002L)
#define SECURITY_BUILTIN_APP_PACKAGE_RID_COUNT      (2L)
#define SECURITY_APP_PACKAGE_RID_COUNT              (8L)
#define SECURITY_CAPABILITY_BASE_RID                (0x00000003L)
#define SECURITY_BUILTIN_CAPABILITY_RID_COUNT       (2L)
#define SECURITY_CAPABILITY_RID_COUNT               (5L)

//
// Built-in Packages.
//

#define SECURITY_BUILTIN_PACKAGE_ANY_PACKAGE        (0x00000001L)

//
// Built-in Capabilities.
//

#define SECURITY_CAPABILITY_INTERNET_CLIENT                     (0x00000001L)
#define SECURITY_CAPABILITY_INTERNET_CLIENT_SERVER              (0x00000002L)
#define SECURITY_CAPABILITY_PRIVATE_NETWORK_CLIENT_SERVER       (0x00000003L)
#define SECURITY_CAPABILITY_PICTURES_LIBRARY                    (0x00000004L)
#define SECURITY_CAPABILITY_VIDEOS_LIBRARY                      (0x00000005L)
#define SECURITY_CAPABILITY_MUSIC_LIBRARY                       (0x00000006L)
#define SECURITY_CAPABILITY_DOCUMENTS_LIBRARY                   (0x00000007L)
#define SECURITY_CAPABILITY_ENTERPRISE_AUTHENTICATION           (0x00000008L)
#define SECURITY_CAPABILITY_SHARED_USER_CERTIFICATES            (0x00000009L)
#define SECURITY_CAPABILITY_REMOVABLE_STORAGE                   (0x0000000AL)

#define SECURITY_CAPABILITY_INTERNET_EXPLORER                   (0x00001000L)

The shared user certificates capability and the associated storage changes are explained here.

Networking

Several networking changes have been made to support isolation.

Firewall Changes for App Container

Microsoft has introduced some new structures to the in-built Firewall to support App Containers. 

Network Isolation

Windows 8 introduces the concept of network isolation (.docx). This brings with it a raft of new functions; we won't list them all, but some are interesting in our mind for attack surface enumeration. The network isolation features rely on the new capability SID constants to allow an application to be:
  • Internet client
  • Internet Server
  • Private network client
If you need to grant an exemption to an application to allow it to speak over loopback, the network isolation document / Fiddler blog post shows how it can be done:

To Make Your App Exempt by Package ID (SID)

%windir%\system32\CheckNetIsolation.exe LoopbackExempt -a -p=S-1-15-2-4125766819-3228448775-2449327860-2490758337-1264241865-3581724871-2122349299
To Make Your App Exempt by Appcontainer Name

%windir%\system32\CheckNetIsolation.exe LoopbackExempt -a -n=stocks_mw26f2swbd5nr
To Remove the Exemption for a Specific App by Package ID

%windir%\system32\CheckNetIsolation.exe LoopbackExempt -d -p=S-1-15-2-4125766819-3228448775-2449327860-2490758337-1264241865-3581724871-2122349299
To Remove All Exemptions (For All Apps)

%windir%\system32\CheckNetIsolation.exe LoopbackExempt -c
To See All Apps that are LoopbackExempt

%windir%\system32\CheckNetIsolation.exe LoopbackExempt -s
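If you end up scripting these exemptions, the command lines can be composed as follows (a sketch of our own; it merely rebuilds the CheckNetIsolation.exe invocations listed above):

```python
import os

def loopback_exempt_cmd(action, package_sid=None, appcontainer_name=None):
    """Compose a CheckNetIsolation.exe LoopbackExempt command line.

    action maps to the switches listed above:
    'add' -> -a, 'delete' -> -d, 'clear' -> -c, 'show' -> -s.
    """
    flags = {"add": "-a", "delete": "-d", "clear": "-c", "show": "-s"}
    exe = os.path.expandvars(r"%windir%\system32\CheckNetIsolation.exe")
    cmd = [exe, "LoopbackExempt", flags[action]]
    if package_sid is not None:
        cmd.append("-p=" + package_sid)
    if appcontainer_name is not None:
        cmd.append("-n=" + appcontainer_name)
    return cmd

# On Windows one would then run, e.g.:
# subprocess.run(loopback_exempt_cmd("show"))
```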

Windows Filtering Platform

The Windows Filtering Platform is described by Microsoft as:
"Windows Filtering Platform (WFP) is a set of API and system services that provide a platform for creating network filtering applications. The WFP API allows developers to write code that interacts with the packet processing that takes place at several layers in the networking stack of the operating system."
This too has seen support added for App Containers. There are now two additional filtering condition flags:
  • FWP_CONDITION_FLAG_IS_APPCONTAINER_LOOPBACK - Tests if the network traffic is AppContainer loopback traffic.
  • FWP_CONDITION_FLAG_IS_NON_APPCONTAINER_LOOPBACK - Tests if the network traffic is non-AppContainer loopback traffic.


App Container Profiles

Microsoft have created two support functions for the creation and deletion of App Container profiles: CreateAppContainerProfile and DeleteAppContainerProfile.

These functions are exported by UserEnv.dll (not present in Windows 7). It should be noted that a quick hunt through IDA shows these are just stub functions for CreateAppContainerProfileWorker and DeleteAppContainerProfileWorker in ext-ms-win-profile-userenv-l1-1-0.dll.

Some other interesting enumeration functions, again useful for attack surface analysis, also exist; the names are pretty self-explanatory.
Anyway, that's it for the first post; we hope you find it useful...

Further Reading / Links

Thursday, 22 March 2012

Securing Oracle Apex - ApexSec 2.1 Released


We would like to announce the next major release of our Apex security analyser, ApexSec 2.1. This incorporates many new detection routines for the new issues we have been researching over the past months.

More detection routines for Cross-Site Scripting and SQL Injection, and a reduction in false positives.

Compatible with Linux, Windows and Mac OS X.

Works with all versions of Apex, either via export files or a direct database connection.


Enhancements to the built-in Apex browser mean that issues can be fixed quickly and easily. ApexSec will keep the Apex browser in sync while you navigate to the issue, so you can edit your Apex application within the Apex browser.

Full explanations of vulnerabilities, and complete highlighted display of important issues.

Reports can be output in several formats.


New type detection means that numeric items, synonyms and views are analysed within vulnerabilities.

Tool-tips to quickly identify item settings, and navigation aids to solve vulnerable code.



Package processing to detect vulnerabilities inside PL/SQL Apex application code.


ApexSec will read the install scripts from an exported application to derive types, procedures, synonyms and any other information to increase detection accuracy.






Many fixes, enhancements and streamlining of the interface.

Integration with JUnit-compatible build processes such as Hudson.





We are committed to continuing and maintaining our on-line testing service:

  • Free summary scans
  • Free HTML on-line report for Applications up to 15 pages
  • Full free on-line scan for open source projects
  • Full free on-line scan for registered charities

Search for 'oracle apex security', visit our main ApexSec web page or alternatively contact us for more details.


Sunday, 18 March 2012

Software Security Debt - The Value in Measuring Security Debt

We published a free / paid for (your choice) eBooklet on software security debt earlier this month. What follows is an extract (2 of 20 pages) introducing some of the concepts of software security debt and the value of measurement...

Software Security Debt
All software, no matter how simple, is likely to carry a degree of security debt. As software complexity increases, the likelihood of incurring security debt also increases. This relationship between development and security debt is analogous to that between development and bugs, due to the simple fact that some security defects are a type of software bug.

Debt should not necessarily be considered negatively; as an artefact of economic development processes, the accrual of debt, security or otherwise, is normal and good business. It is not likely that software will ever reach a debt-free utopia. Knowledge of security debt is an important input into the risk profile and overall understanding of a product’s or organisation’s exposure.

Having a security debt management process should be a business and development goal for software organisations. If the development and testing teams of an organisation say there isn't any debt, yet cannot demonstrate the steps taken to control it, then it’s likely that a significant amount exists. If proactive steps are not taken, then the accrued security debt can in time become toxic. Debt that has become toxic can then require a far greater level of investment to resolve, under externally dictated time-lines and typically under forced repayment conditions.

The Value in Measuring Security Debt
No organisation can avoid the presence of security debt. On this basis, no responsible, security and risk aware organisation can reasonably ignore its presence. As security debt cannot be avoided, it should be seen as another input to the risk management process. Any attempt to avoid understanding or measuring the level of debt is akin to not wishing to understand the risk exposure. This lack of visibility and understanding will result in the lack of an effective approach to mitigate or remediate the identified risks.

The value in measuring the level of software security debt is in the broader understanding of risk exposure it provides. This broader risk picture can then facilitate the understanding of:
  • The business exposure to the risk from a security incident (forced repayment event).
  • The speed and rate of payback of issues of differing severity.


Once the level of debt is understood it also facilitates strategic planning and metrics.
For example:
  • The potential for future expenditure to address known defects over the short to medium term.
  • Debt trends and the gauging of the return-on-investment from the SDLC process.
  • Understanding the percentage of the debt that is composed of the OWASP Top 10 or similar, facilitating additional prioritisation.
  • Identification of those security issues that can be linked together to achieve a higher level of impact.

Secure Development Life Cycles Identify Debt
It’s important to understand the relationship between an SDLC and security debt. The precursor to an SDLC is security mindfulness. Security mindfulness is where a formal SDLC may not be deployed throughout the organisation, but assurance processes or security related activities do occur at the different phases of development or testing. These activities will then likely mature into a full SDLC.

When adopting an SDLC the benefits of identifying vulnerabilities earlier in the lifecycle will be seen for new development. However, when SDLC or security mindfulness activities are applied to both new and old development there will be prolonged periods of implementation debt discovery.

As these activities increase, the likelihood is that the volume of issues found in software will quickly start to outpace the resources available to resolve them on a per release or per product basis. The reasons for the acceleration in the discovery of security issues can be numerous; however, likely drivers include:
  • Increased manual code coverage.
  • Increased use of static code analysis.
  • Increased use of automated security testing (fuzzing).
  • Development and testing team knowledge and awareness of security issues enabling identification.
  • Root cause analysis and variation identification based on publicly disclosed flaws.

As a result of this increase in the volume of issues and the associated resource constraints, organisations tend to focus only on the most severe issues. Over time, a mountain of security debt starts to grow fuelled by the volume of lower impact issues. However, while individual issues may be rated at a certain severity level, the same is not true for combinations of issues. That is to say, a number of distinct lower impact issues when combined or chained together, can carry equal impact to a single higher rated issue. While the complexity related to discovery and exploitation is greater, the ultimate impact can be the same. SDLCs today do not adequately deal with this scenario of aggregating lower severity issues to understand impact.

If you're interested in reading more feel free to download the paper.

Friday, 16 March 2012

Windows Low Integrity Processes and Recycle Bin Metadata

Back in January we released a small tool to enumerate low integrity accessible items on Windows. During the course of releasing it we obviously ran it, and as a result we identified a funny little issue (not security earth-shattering): low integrity processes have the ability to write to recycle bin metadata.

The impact of this vulnerability is that a low integrity process can modify where a file will be restored to when taken back out of the recycle bin.

When a file is deleted, metadata gets created in a directory such as:
C:\$Recycle.Bin\S-1-5-21-3594361658-2603294332-2943340413-1001

In this directory there will be a metadata file, e.g. '$IOM7SPO.txt'.

This file is accessible to low integrity processes:

C:\>GetLowIntegrityLevelObjects.exe
[*] Low integrity accessible - (c)2012 Recx Ltd
[*] http://www.recx.co.uk
[i] Low accessible directory    C:\\$Recycle.Bin
[i] Low accessible directory
C:\\$Recycle.Bin\S-1-5-21-3594361658-2603294332-2943340413-1001
[i] Low accessible file
C:\\$Recycle.Bin\S-1-5-21-3594361658-2603294332-2943340413-1001\$IOM7SPO.txt
[i] Low accessible file
C:\\$Recycle.Bin\S-1-5-21-3594361658-2603294332-2943340413-1001\desktop.ini

This can also be verified with accesschk:
C:\>accesschk.exe -w -e
C:\\$Recycle.Bin\S-1-5-21-3594361658-2603294332-2943340413-1001\$IOM7SPO.txt
Accesschk v5.02 - Reports effective permissions for securable objects
Copyright (C) 2006-2011 Mark Russinovich
Sysinternals - www.sysinternals.com
C:\$Recycle.Bin\S-1-5-21-3594361658-2603294332-2943340413-1001\$IOM7SPO.txt
  Low Mandatory Level [No-Write-Up]
  RW BUILTIN\Administrators
  RW NT AUTHORITY\SYSTEM
  RW ollierecx\ollie
This metadata file contains the path that the file will be restored to. So when the user manually restores the file, it is not placed in the original location but in an alternate, attacker controlled one. In our particular demo this alternate location was a UNC path (lame data exfiltration, as you could copy it anyway).
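For the curious, the Vista/7-era $I layout is publicly documented: an 8-byte version field (1), the 8-byte original file size, an 8-byte FILETIME deletion time, then the original path as UTF-16-LE padded to 260 characters (MAX_PATH). A sketch of forging and reading such a record, assuming that layout (our own illustration, not our released tool):

```python
import struct

def build_i_record(original_path, size=0, deleted_filetime=0):
    """Forge a Vista/7-style $I record: version, size, deletion time,
    then the restore path as UTF-16-LE padded to MAX_PATH characters."""
    path = original_path.encode("utf-16-le").ljust(260 * 2, b"\x00")
    return struct.pack("<QQQ", 1, size, deleted_filetime) + path

def restore_path(record):
    """Extract the restore path the shell would use from a $I record."""
    return record[24:].decode("utf-16-le").split("\x00", 1)[0]
```

A low integrity process that rewrites the path field is exactly the issue described above: the file silently restores to the attacker's choice of location, such as a UNC path.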

While the location is reported at the bottom of the toolbar, the user may not, in our humble opinion, pay close attention to it.

We reported the issue to Microsoft who rightly responded with:
"Even though, the user can restore the file to a location other than where the file originally resided,  we were not successful in elevating privileges.  As a result, this issue does not hit the bar to be released as a security bulletin."
We thought about this and only came up with contrived routes such as:
  • You download an executable file from an untrusted source
  • Delete it
  • Then restore it, at which point it restores to a less ideal location such as the start-up folder
These steps are like asking a user to run cmd.exe as Administrator to add a user to the admin group and send you the password... so in short, another lesson in security research failure.

Anyway that's it... till next time...

Wednesday, 7 March 2012

Securing Oracle Apex - Allow Rich Text Editing

We recently received an interesting correspondence:

I have a requirement to allow rich text editors for content that will be printed to other pages. I'm looking for something like OWASP AntiSamy or HTML Purifier that could be used in the PL/SQL to sanitize the input and thought maybe you would know where to look.


Thanks,

Greg
Using a known library like AntiSamy is generally a good idea for several reasons:
  • Don't re-invent the wheel badly.
  • OWASP probably know a bit more about cross-site scripting than us.
  • Issues can be fixed centrally so everybody benefits.

So we decided to gather the relevant Java libraries together and put together an Oracle package to leverage this excellent resource, for Apex developers who want to display dynamic HTML marked-up content while significantly reducing the risk of cross-site scripting attacks.

First we created a very simple Apex application to test the vulnerability.

This consists of two regions: one which contains the Apex rich-text editor, and the other a PL/SQL region to output the results.





As can be seen, this works well: the user has turned the text green and this is correctly displayed in the Output region.

If we analyse the application with our ApexSec security analyser, we can see that there is a problem:





ApexSec has identified both the cross-site scripting vulnerability and the item causing it, in this case the :P1_INPUT item.

We can quickly test the vulnerability by using the source button on the rich text editor.




The source button allows us to type the HTML in as raw data; we raise a simple alert box this time (for more interesting exploits, read our other blogs). Clicking the submit button leads to the predictable alert box:





What is needed is a way to safely keep the tags that define the styling while filtering out the malicious markup that may lead to a cross-site scripting attack.

Installing the library
We install the Java library and wrapper into our schema in the 'developer days' image (OBE):

$ loadjava -resolve -genmissing -user obe/obe Antisamy.jar

We install the PL/SQL call specifications for the installed Java library:

$ sqlplus obe/obe @recx_antisamy.sql

Function created.
Procedure created.


Calling the new library in the PL/SQL region is as simple as calling the recx_antisamy_scan(stringToSanitise) function. When we re-scan the project using ApexSec, we can see that there is no longer a cross-site scripting issue detected.
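For reference, call specifications for Java stored code follow a standard Oracle shape. The actual definitions live in recx_antisamy.sql; the sketch below is purely illustrative, and the Java class and method names in it are assumptions:

```sql
-- Hypothetical sketch only: the real call specification ships in
-- recx_antisamy.sql, and the Java class/method names here are assumed.
CREATE OR REPLACE FUNCTION recx_antisamy_scan (p_html IN VARCHAR2)
  RETURN VARCHAR2
AS LANGUAGE JAVA
  NAME 'RecxAntiSamy.scan(java.lang.String) return java.lang.String';
```

A PL/SQL region can then emit sanitised output with something along the lines of htp.p(recx_antisamy_scan(:P1_INPUT)).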

By default the library uses the antisamy-tinymce-1.4.4.xml policy (the most restrictive; it doesn't allow colour), as shown above. This can be changed to a more relaxed policy with the recx_antisamy_policy function; a full list of the installed policies is here.

We run a simple test again, adding a script tag; this time the tag has been filtered out by the AntiSamy library, but the formatting has been kept.


Recx perform security audits of Apex code, as well as advise on secure Apex coding techniques. Contact us for information on how we can help you secure your Apex estate.

Our ApexSec security console is the only tool to do deep analysis of Apex code, highlighting cross-site scripting, SQL injection, configuration issues and insecure coding constructs.

Downloads

Example Apex Application
Antisamy Java Library
PL/SQL Call Specifications 

Thanks to Greg for throwing down the gauntlet. For a copy of the Eclipse project, feel free to email us.

Disclaimer: THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Tuesday, 6 March 2012

eBooklet - Software Security Debt in Modern Software Development

So we've just released our inaugural eBooklet titled Software Security Austerity - Software security debt in modern software development.

It's a 7,500+ word look at the concepts of security debt and software security austerity. It provides both an introduction as well as real-world strategies to effectively manage security debt within the software development process.

Paper Abstract

The concept of technical debt is not a new one. Technical debt has historically referred to the trade-off between getting a solution or system to market versus a perfectly designed, bug-free product. In the process of trading perfection for an economically viable development model, the software incurs a degree of debt.

The debt analogy is starting to be applied to systems and software security. Recx have previously discussed some of the trade-offs made with regard to software security and time-to-market in our article entitled: Breaking the Inevitable Niche/Vertical Technology Security Vulnerability Lifecycle.

It is important to recognise that a Secure Development Life Cycle (SDLC) does not stop the presence or accumulation of security debt. A maturing SDLC allows an organisation to identify weaknesses and thus convert a larger volume of previously unknown debt into known security debt. Once known, the security debt then needs robust processes to both service and repay it over a period of time. Typically, SDLCs only set fix criteria for issues above a certain impact level. As organisations get better and more efficient at identifying security issues, they typically start to accrue a substantial number of lower-rated issues in the process. While these issues may individually have less of an impact, several lower-rated issues combined can be as impactful as a higher-rated one. As a result, the accumulation of a large number of lower-impact issues, without any strategy for resolving them, can be just as risky to security.

In this white-paper, Recx first introduces the reader to the concept and risk of software security debt. A review is then performed of the types and sources of debt before discussing how it can build up when using a risk assessment based approach to prioritisation. A number of debt management strategies are then presented along with associated events, such as servicing, repayment, overhang and expiry. Finally a number of conclusions are drawn around software security debt and why it needs to be considered as part of mature secure software development and risk management processes.

Paper Availability

We're making the paper available in a number of formats. We've published it for the Kindle and it's available to buy for £2.05 (GBP) from the Amazon store.


In addition, we've created an ePub version of the paper and made that available for the same price from our Google checkout.

We're also making the paper available for download direct from us as a PDF for free. The reason for selling it and giving it away? Well, we think the information is valuable and worth the money, but we also want you to decide whether you think it's worth paying less than the price of a pint of beer, or about 20% of a single ACM portal download, for.

Why No iTunes / Barnes and Noble Nook Stores?

When we started down this road we wanted to have it available on all stores. However, we couldn't do this for the following reasons:
  • Apple - you need an ISBN, which we didn't think was appropriate for a booklet. 
  • Nook - they currently only accept USA based publishers.
To address this gap we made the ePub available for purchase.

Friday, 2 March 2012

A Partial Technique Against ASLR - Multiple O/Ss

Overview

With the advent of Address Space Layout Randomization (ASLR), trying to find new techniques that can weaken its effectiveness is a constant game of cat and mouse. Historically, successful attacks against full ASLR implementations have involved one of:
  • Flaws in the implementation resulting in address bias.
  • A secondary memory revelation vulnerability leaking the addresses that are randomized by ASLR.
  • Partial overwrites leading to the ability to do relative addressing.
We're happy to reveal a small piece of internal research showing a potential technique that, at least in the lab, works across multiple operating systems.

For this technique to be useful, you will need:
  • A 32bit ASLR enabled binary in which all libraries are also randomized.
  • The ability to cause excessive memory allocations either via normal operation or a bug.
  • The ability to cause a dynamically linked (Windows) or shared (Linux) library to load at your time of choosing.
  • Minimal other activity in the application - although some can be tolerated.
  • Sufficient RAM or swap to allocate a majority of user space for the target process (64bit operating systems can help here due to typically increased available physical RAM).
We recognize this is quite a list, one where 'the moon on a stick' wouldn't be out of place. As a result, we think our findings fall into the interesting-but-contrived bucket.

Research

To show how we discovered this particular corner case and what it is, take the following Windows test case:

#include "stdafx.h"
#include <Windows.h>

int _tmain(int argc, _TCHAR* argv[])
{
     bool bShown = false;
     int intCount = 0;
     LPVOID intAddress = 0;
     LPVOID intLastAddress = 0;

     while (1) {
          // Allocate fixed-size blocks until the heap is exhausted
          intAddress = HeapAlloc(GetProcessHeap(), 0, 3096);
          if (intAddress == NULL) {
               // Print the last successful address, the error and the count
               fprintf(stdout, "0x%08x - %d (%d)\n", intLastAddress, GetLastError(), intCount);
               return 0;
          }
          if (!bShown) {
               // First allocation: print heap, stack and code addresses
               fprintf(stdout, "0x%08x 0x%08x 0x%08x\n", intAddress, &bShown, &_tmain);
          }
          bShown = true;
          intCount++;
          intLastAddress = intAddress;
     }
     return 0;
}

and the following test case on Linux (note: the reason we use MAP_NORESERVE is that we are running within a virtual machine and thus have limited real memory; we recognize dlmalloc etc. may skew the results):

#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <errno.h>
#include <sys/mman.h>

int main(void)
{
    void *vdFoo = 0;
    void *vdLast = 0;
    bool bShown = false;
    int intCount = 0;

    while (1) {
        // Map fixed-size anonymous regions until the address space is exhausted
        vdFoo = mmap(NULL, 3096, PROT_NONE, MAP_PRIVATE | MAP_NORESERVE | MAP_ANON, -1, 0);
        if (!bShown) {
            // First allocation: print heap, stack and code addresses
            fprintf(stdout, "0x%08x,0x%08x,0x%08x\n", vdFoo, &bShown, &main);
            bShown = true;
        }
        if (vdFoo == MAP_FAILED) {
            // Print the last successful address, errno and the count
            fprintf(stdout, "%08x - %d (%d)\n", vdLast, errno, intCount);
            return 0;
        } else {
            vdLast = vdFoo;
        }
        intCount++;
    }
}

While contrived, they allow us to demonstrate the initial indicator. Now, if we compile as a 32bit process and run the Windows version on a 64bit operating system a number of times, we get the following:


The same test on Linux returns:

If you review the output from both platforms, you can observe:
  • Stack, heap and code locations are all randomized between runs as expected.
  • The number of allocations prior to failing to allocate are variable.
  • The last address allocated prior to failing to allocate is the same.
Yes, you read that last point correctly: the last heap address prior to allocations failing (i.e. memory exhaustion of the process's virtual address space) is always the same. Although we got terribly excited by this behaviour, we recognized these test cases were not representative of the real world. What application only ever allocates the same size? We should also point out that on Windows, while the address is consistent across runs, the last address changes across reboots. But it did convince us to investigate the idea further.

So we next modified our test case to use random allocation sizes. On Windows the test case became:

int intSize = rand() % (5000 - 3000 + 1);
intAddress = HeapAlloc(GetProcessHeap(), 0, intSize);

and on Linux:

int intSize = rand() % (5000 - 3000 + 1);
vdFoo = mmap(NULL, intSize, PROT_NONE, MAP_PRIVATE | MAP_NORESERVE | MAP_ANON, -1, 0);

We also made a slight adjustment in the Linux case, as we were regularly getting an invalid argument error and thus an early failure when testing, since the random size isn't page aligned and can be zero (on Windows we didn't experience this, as a failed allocation would always return ERROR_NOT_ENOUGH_MEMORY (0x08)). To resolve this problem on Linux, we added some logic around the failure conditions to catch only memory exhaustion. The full modified Linux (POSIX) test case looked like this:

#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <errno.h>
#include <sys/mman.h>

int main(void)
{
    void *vdFoo = 0;
    void *vdLast = 0;
    bool bShown = false;
    int intCount = 0;

    while (1) {
        int intSize = rand() % (5000 - 3000 + 1);
        vdFoo = mmap(NULL, intSize, PROT_NONE, MAP_PRIVATE | MAP_NORESERVE | MAP_ANON, -1, 0);
        if (!bShown) {
            fprintf(stdout, "0x%08x,0x%08x,0x%08x\n", vdFoo, &bShown, &main);
            bShown = true;
        }
        if (vdFoo == MAP_FAILED) {
            // Sometimes we get an invalid argument error because the random
            // size isn't page aligned; only exit on the errno we observed at
            // memory exhaustion in our test environment.
            if (errno == 1) {
                fprintf(stdout, "%08x - %d (%d)\n", vdLast, errno, intCount);
                return 0;
            }
        } else {
            vdLast = vdFoo;
            intCount++;
        }
    }
}

These examples are hopefully slightly more representative of realistic scenarios. Using these test cases, we started seeing variation in the last successfully allocated heap address on Windows, close to the expected range, with 134 different heap addresses prior to failure. On Linux, we continued to see that the last successfully allocated address was 0x00010000. The Windows result, while initially disappointing, didn't preclude the possibility that entropy was heavily reduced in low-memory situations when late loading a dynamically linked or shared library; on Linux, based on these results, it should have been a given.

So we started thinking: how could this potentially help us in the real world? We thought that if the following criteria could be satisfied, it might head towards a practical application:
  • A process can be crashed then re-spawned or forked fresh OR details of current total memory use obtained.
  • Memory can be allocated in a semi controlled fashion.
  • You know the rough number of allocations required to exhaust nearly all memory from your known state.
  • You can cause or trigger a library to be loaded or bound at a point of your choosing.
The code below satisfies those requirements and serves as an example on Windows:

#include "stdafx.h"
#include <Windows.h>

int _tmain(int argc, _TCHAR* argv[])
{
     bool bShown = false;
     int intCount = 0;
     LPVOID intAddress = 0;
     LPVOID intLastAddress = 0;

     while (1) {
          int intSize = rand() % (5000 - 3000 + 1);
          intAddress = HeapAlloc(GetProcessHeap(), 0, intSize);
          if (intAddress == NULL) {
               fprintf(stdout, "0x%08x - %d (%d)\n", intLastAddress, GetLastError(), intCount);
               return 0;
          }
          if (!bShown) {
               fprintf(stdout, "0x%08x 0x%08x 0x%08x\n", intAddress, &bShown, &_tmain);
          }
          bShown = true;
          intCount++;
          if (argc > 2) {
               // After the requested number of allocations, late load the DLL
               // and print the address of one of its exported functions
               if (intCount == _wtoi(argv[1])) {
                    HMODULE hModule = LoadLibrary(argv[2]);
                    if (hModule != NULL) {
                         VOID *vdProc = GetProcAddress(hModule, "Function");
                         fprintf(stdout, "0x%08x\n", vdProc);
                    } else {
                         fwprintf(stdout, L"couldn't load %s - %d\n", argv[2], GetLastError());
                    }
               }
          }
          intLastAddress = intAddress;
     }
     return 0;
}

We ran the test case on Windows three times (rebooting in between to satisfy Windows' once-per-boot library randomization) using a variety of different allocation counts before loading our DLL, and got the following (SprayDontPray.exe [Allocations] [DLL]):


Addresses Across Reboots and Variable Allocations Before Delayed Loading of a DLL
We're not claiming these results are statistically significant, but across this small data set the function landed at the same address when late loading while approaching the limit of available virtual address space. It's also worth noting the test cases failed after between 2,077,554 and 2,077,930 allocations. Further helping the real-world application of this technique, anywhere between ~1,500,000 and ~2,070,000 allocations could have occurred and still result in the same address for the function.

On Linux we modified our test case to be the following:

#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <dlfcn.h>
#include <errno.h>
#include <sys/mman.h>

int main(int argc, char **argv)
{
    void *vdFoo = 0;
    void *vdLast = 0;
    bool bShown = false;
    int intCount = 0;

    while (1) {
        int intSize = rand() % (5000 - 3000 + 1);
        vdFoo = mmap(NULL, intSize, PROT_NONE, MAP_PRIVATE | MAP_NORESERVE | MAP_ANON, -1, 0);
        if (!bShown) {
            fprintf(stdout, "0x%08x,0x%08x,0x%08x\n", vdFoo, &bShown, &main);
            bShown = true;
        }
        if (vdFoo == MAP_FAILED) {
            // Sometimes we get an invalid argument error because the random
            // size isn't page aligned; only exit on the errno we observed at
            // memory exhaustion in our test environment.
            if (errno == 1) {
                fprintf(stdout, "%08x - %d (%d)\n", vdLast, errno, intCount);
                return 0;
            }
        } else {
            vdLast = vdFoo;
            intCount++;
        }
        if (argc > 2) {
            // After the requested number of allocations, late load the
            // shared library and print the address of one of its symbols
            if (intCount == atoi(argv[1])) {
                void *hModule = dlopen(argv[2], RTLD_NOW);
                if (hModule != NULL) {
                    void *vdProc = dlsym(hModule, "test");
                    char *error = dlerror();
                    if (error != NULL) {
                        fprintf(stdout, "! %s\n", error);
                    } else {
                        fprintf(stdout, "0x%08x\n", vdProc);
                    }
                } else {
                    fprintf(stdout, "couldn't load %s - %s\n", argv[2], dlerror());
                }
            }
        }
    }
}

The results on Linux were more surprising. Using the above test code and a loop (while (true); do ./aslr 750000 ./libourlib.so >> ./fos.txt; done;), we were able to iterate the case over 100 times. From this run we saw the following breakdown:

Linux  2.6.38-8 Address Obtained for a Function in Shared Library Loaded Late
We found that reducing the number of allocations before attempting to load the shared library increased the number of possible addresses. Increasing the number of allocations before attempting to load further increased the number of failures to allocate memory during the run. So, based on our small sample, this technique appears less reliable on Linux than on Windows.

People are no doubt asking at this point if we tested this on Mac OS X / iOS. In short, we would have, but our POSIX-compatible test case (above) just causes a kernel panic on a fully patched Mac OS X (10.7.3). It doesn't look exploitable, as it's actually the kernel panicking itself when it runs out of a certain type of resource.

Conclusions

So, in conclusion, what does this buy us? Well, if you can't heap spray, either because the just-in-time compiler is secure and/or non-executable memory is used, and you don't possess any information leaks or the ability to do a partial overwrite, then the described method may just yield the return-oriented-programming gadgets or ret2libc payload at a known address that you are looking for.

Also, as a final caveat, we didn't look at PaX and how it adds to the mix on Linux.

Mobile Device Special Mention

Mobile devices deserve a special mention, as people will no doubt wonder what the implications are for Apple (iOS) and Android (Linux), among others. Because devices today, in our experience, don't ship with enough physical RAM to allow user-land memory exhaustion, and because they don't support swap, we don't believe this approach will yield much (if anything) on these platforms in the short term. This does however come with some caveats:
  • Changes in the physical RAM profiles on mobile devices will obviously change the risk of this attack becoming practical.
  • Shared libraries that are backed by a single physical RAM instance that can be used to consume virtual address space.
  • Applications that use MAP_NORESERVE with mmap, or memory mapped files, that can be leveraged to consume a process's virtual address space without consuming actual RAM.

Other Applications of Similar Techniques

While we've focused on the late loading of libraries in this post we also foresee other potential applications for this technique. These other applications include targeting JIT compilers that produce native code. These engines could be similarly targeted to potentially produce the required gadgets at known addresses even where mitigations exist against traditional spraying techniques.

Windows 8

With the release of the Windows 8 consumer preview we took it for a spin to see if the technique would still work. The set-up was slightly different from the Windows 7 test environment, but not sufficiently so that we believe it would impact the results. The Windows 8 machine was the 32bit version running inside VirtualBox. It seems Microsoft are ahead of us here and have managed to mitigate this anomalous behaviour in Windows 8. So, in short, this technique won't be valid in the future...

Vendor Notifications

We did let a number of OS vendors know about this research prior to publication including Microsoft (Windows) and Google (Linux for Chrome OS). In the case of Microsoft we also worked with them to answer any questions they had and to ensure they didn't feel we were going to cause a cyber apocalypse by releasing this.

We also reported the kernel crash to Apple!