1. Conduct all data validation on a trusted system (e.g., The server)
2. Identify all data sources and classify them into trusted and untrusted. Validate all data from untrusted sources (e.g., Databases, file streams, etc.)
3. There should be a centralized input validation routine for the application
4. Specify proper character sets, such as UTF-8, for all sources of input
5. Encode data to a common character set before validating (Canonicalize)
6. All validation failures should result in input rejection
7. Determine if the system supports UTF-8 extended character sets and if so, validate after UTF-8 decoding is completed
8. Validate all client provided data before processing, including all parameters, URLs and HTTP header content (e.g., Cookie names and values). Be sure to include automated post backs from JavaScript, Flash or other embedded code
9. Verify that header values in both requests and responses contain only ASCII characters
10. Validate data from redirects (An attacker may submit malicious content directly to the target of the redirect, thus circumventing application logic and any validation performed before the redirect)
11. Validate for expected data types
12. Validate data range
13. Validate data length
14. Validate all input against a “white” list of allowed characters, whenever possible
15. If any potentially hazardous characters must be allowed as input, be sure that you implement additional controls like output encoding, secure task-specific APIs and accounting for the utilization of that data throughout the application. Examples of common hazardous characters include: < > " ' % ( ) & + \ \' \"
16. If your standard validation routine cannot address the following inputs, then they should be checked discretely (utilize canonicalization to address double encoding or other forms of obfuscation attacks)
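The input validation items above (decode, canonicalize, whitelist, reject on failure) can be sketched in Python; this is an illustrative example, not part of the checklist, and the character whitelist is a hypothetical one chosen for demonstration:

```python
import re
import unicodedata

# Hypothetical whitelist: letters, digits and a few safe punctuation marks (item 14).
ALLOWED = re.compile(r"[A-Za-z0-9 .,_-]{1,64}")

def validate(raw: bytes) -> str:
    """Decode, canonicalize, then whitelist-validate; reject on any failure."""
    try:
        text = raw.decode("utf-8")  # validate after UTF-8 decoding completes (item 7)
    except UnicodeDecodeError:
        raise ValueError("rejected: invalid UTF-8")  # all failures reject input (item 6)
    text = unicodedata.normalize("NFKC", text)       # canonicalize before validating (item 5)
    if not ALLOWED.fullmatch(text):                  # "white" list of allowed characters
        raise ValueError("rejected: disallowed characters")
    return text
```

Running such a check server-side (item 1) keeps the trusted system in control even if client-side checks are bypassed.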
17. Conduct all encoding on a trusted system (e.g., The server)
18. Utilize a standard, tested routine for each type of outbound encoding
19. Contextually output encode all data returned to the client that originated outside the application’s trust boundary. HTML entity encoding is one example, but does not work in all cases
20. Encode all characters unless they are known to be safe for the intended interpreter
21. Contextually sanitize all output of un-trusted data to queries for SQL, XML, and LDAP
22. Sanitize all output of un-trusted data to operating system commands
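Contextual output encoding (items 19-20) for an HTML body context can be done with Python's standard library; note, as item 19 says, HTML entity encoding is only correct for HTML sinks — attributes, JavaScript, URLs, SQL and LDAP each need their own encoder:

```python
import html

untrusted = '<img src=x onerror="alert(1)">'

# HTML-entity encode before interpolating into an HTML body (item 19);
# quote=True also encodes quote characters for attribute contexts.
safe = html.escape(untrusted, quote=True)
print(safe)  # &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```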
23. Require authentication for all pages and resources, except those specifically intended to be public
24. All authentication controls must be enforced on a trusted system (e.g., The server)
25. Establish and utilize standard, tested, authentication services whenever possible
26. Use a centralized implementation for all authentication controls, including libraries that call external authentication services
27. Segregate authentication logic from the resource being requested and use redirection to and from the centralized authentication control
28. All authentication controls should fail securely
29. All administrative and account management functions must be at least as secure as the primary authentication mechanism
30. If your application manages a credential store, it should ensure that only cryptographically strong one-way salted hashes of passwords are stored and that the table/file that stores the passwords and keys is write-able only by the application. (Do not use the MD5 algorithm if it can be avoided)
31. Password hashing must be implemented on a trusted system (e.g., The server).
32. Validate the authentication data only on completion of all data input, especially for sequential authentication implementations
33. Authentication failure responses should not indicate which part of the authentication data was incorrect. For example, instead of “Invalid username” or “Invalid password”, just use “Invalid username and/or password” for both. Error responses must be truly identical in both display and source code
34. Utilize authentication for connections to external systems that involve sensitive information or functions
35. Authentication credentials for accessing services external to the application should be encrypted and stored in a protected location on a trusted system (e.g., The server). The source code is NOT a secure location
36. Use only HTTP POST requests to transmit authentication credentials
37. Only send non-temporary passwords over an encrypted connection or as encrypted data, such as in an encrypted email. Temporary passwords associated with email resets may be an exception
38. Enforce password complexity requirements established by policy or regulation. Authentication credentials should be sufficient to withstand attacks that are typical of the threats in the deployed environment. (e.g., requiring the use of alphabetic as well as numeric and/or special characters)
39. Enforce password length requirements established by policy or regulation. Eight characters is commonly used, but 16 is better or consider the use of multi-word pass phrases
40. Password entry should be obscured on the user’s screen. (e.g., on web forms use the input type “password”)
41. Enforce account disabling after an established number of invalid login attempts (e.g., five attempts is common). The account must be disabled for a period of time sufficient to discourage brute force guessing of credentials, but not so long as to allow for a denial-of-service attack to be performed
42. Password reset and changing operations require the same level of controls as account creation and authentication.
43. Password reset questions should support sufficiently random answers. (e.g., “favorite book” is a bad question because “The Bible” is a very common answer)
44. If using email based resets, only send email to a pre-registered address with a temporary link/password
45. Temporary passwords and links should have a short expiration time
46. Enforce the changing of temporary passwords on the next use
47. Notify users when a password reset occurs
48. Prevent password re-use
49. Passwords should be at least one day old before they can be changed, to prevent attacks on password re-use
50. Enforce password changes based on requirements established in policy or regulation. Critical systems may require more frequent changes. The time between resets must be administratively controlled
51. Disable “remember me” functionality for password fields
52. The last use (successful or unsuccessful) of a user account should be reported to the user at their next successful login
53. Implement monitoring to identify attacks against multiple user accounts, utilizing the same password. This attack pattern is used to bypass standard lockouts, when user IDs can be harvested or guessed
54. Change all vendor-supplied default passwords and user IDs or disable the associated accounts
55. Re-authenticate users prior to performing critical operations
56. Use Multi-Factor Authentication for highly sensitive or high value transactional accounts
57. If using third party code for authentication, inspect the code carefully to ensure it is not affected by any malicious code
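Items 30-31 (server-side, cryptographically strong, salted one-way password hashes) and item 33 (uniform failure responses) can be sketched with the standard library; the iteration count below is an assumption to be tuned to your hardware, and PBKDF2 is one acceptable choice alongside scrypt, bcrypt or Argon2:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # assumed work factor; tune for your environment

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Salted one-way hash, computed on the trusted system (items 30-31)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids timing side channels and keeps
    # failure behavior uniform (item 33).
    return hmac.compare_digest(candidate, expected)
```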
58. Use the server or framework’s session management controls. The application should only recognize these session identifiers as valid
59. Session identifier creation must always be done on a trusted system (e.g., The server)
60. Session management controls should use well vetted algorithms that ensure sufficiently random session identifiers
61. Set the domain and path for cookies containing authenticated session identifiers to an appropriately restricted value for the site
62. Logout functionality should fully terminate the associated session or connection
63. Logout functionality should be available from all pages protected by authorization
64. Establish a session inactivity timeout that is as short as possible, based on balancing risk and business functional requirements. In most cases it should be no more than several hours
65. Disallow persistent logins and enforce periodic session terminations, even when the session is active. Especially for applications supporting rich network connections or connecting to critical systems. Termination times should support business requirements and the user should receive sufficient notification to mitigate negative impacts
66. If a session was established before login, close that session and establish a new session after a successful login
67. Generate a new session identifier on any re-authentication
68. Do not allow concurrent logins with the same user ID
69. Do not expose session identifiers in URLs, error messages or logs. Session identifiers should only be located in the HTTP cookie header. For example, do not pass session identifiers as GET parameters
70. Protect server side session data from unauthorized access, by other users of the server, by implementing appropriate access controls on the server
71. Generate a new session identifier and deactivate the old one periodically. (This can mitigate certain session hijacking scenarios where the original identifier was compromised)
72. Generate a new session identifier if the connection security changes from HTTP to HTTPS, as can occur during authentication. Within an application, it is recommended to use HTTPS consistently rather than switching between HTTP and HTTPS.
73. Supplement standard session management for sensitive server-side operations, like account management, by utilizing per-session strong random tokens or parameters. This method can be used to prevent Cross Site Request Forgery attacks
74. Supplement standard session management for highly sensitive or critical operations by utilizing per-request, as opposed to per-session, strong random tokens or parameters
75. Set the “secure” attribute for cookies transmitted over a TLS connection
76. Set cookies with the HttpOnly attribute, unless you specifically require client-side scripts within your application to read or set a cookie’s value
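Items 59-61, 73 and 75-76 can be illustrated with Python's standard library; this is a sketch, not a complete session manager, and the cookie name and path are placeholders:

```python
import secrets
from http.cookies import SimpleCookie

# Sufficiently random identifiers from a CSPRNG (items 59-60);
# a per-session anti-CSRF token supplements session management (item 73).
session_id = secrets.token_urlsafe(32)
csrf_token = secrets.token_urlsafe(32)

cookie = SimpleCookie()
cookie["session"] = session_id
cookie["session"]["secure"] = True    # only transmitted over TLS (item 75)
cookie["session"]["httponly"] = True  # invisible to client-side scripts (item 76)
cookie["session"]["path"] = "/app"    # restricted path for the site (item 61)
print(cookie.output())
```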
77. Use only trusted system objects, e.g. server side session objects, for making access authorization decisions
78. Use a single site-wide component to check access authorization. This includes libraries that call external authorization services
79. Access controls should fail securely
80. Deny all access if the application cannot access its security configuration information
81. Enforce authorization controls on every request, including those made by server side scripts, “includes” and requests from rich client-side technologies like AJAX and Flash
82. Segregate privileged logic from other application code
83. Restrict access to files or other resources, including those outside the application’s direct control, to only authorized users
84. Restrict access to protected URLs to only authorized users
85. Restrict access to protected functions to only authorized users
86. Restrict direct object references to only authorized users
87. Restrict access to services to only authorized users
88. Restrict access to application data to only authorized users
89. Restrict access to user and data attributes and policy information used by access controls
90. Restrict access to security-relevant configuration information to only authorized users
91. Server side implementation and presentation layer representations of access control rules must match
92. If state data must be stored on the client, use encryption and integrity checking on the server side to catch state tampering.
93. Enforce application logic flows to comply with business rules
94. Limit the number of transactions a single user or device can perform in a given period of time. The transactions/time should be above the actual business requirement, but low enough to deter automated attacks
95. Use the “referer” header as a supplemental check only; it should never be the sole authorization check, as it can be spoofed
96. If long authenticated sessions are allowed, periodically re-validate a user’s authorization to ensure that their privileges have not changed and if they have, log the user out and force them to re-authenticate
97. Implement account auditing and enforce the disabling of unused accounts (e.g., After no more than 30 days from the expiration of an account’s password.)
98. The application must support disabling of accounts and terminating sessions when authorization ceases (e.g., Changes to role, employment status, business process, etc.)
99. Service accounts or accounts supporting connections to or from external systems should have the least privilege possible
100. Create an Access Control Policy to document an application’s business rules, data types and access authorization criteria and/or processes so that access can be properly provisioned and controlled. This includes identifying access requirements for both the data and system resources
101. All cryptographic functions used to protect secrets from the application user must be implemented on a trusted system (e.g., The server)
102. Protect master secrets from unauthorized access
103. Cryptographic modules should fail securely
104. All random numbers, random file names, random GUIDs, and random strings should be generated using the cryptographic module’s approved random number generator when these random values are intended to be un-guessable
105. Cryptographic modules used by the application should be compliant to FIPS 140-2 or an equivalent standard. (See http://csrc.nist.gov/groups/STM/cmvp/validation.html)
106. Establish and utilize a policy and process for how cryptographic keys will be managed
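Item 104 (approved random number generation for un-guessable values) maps in Python to the `secrets` module; the value names below are hypothetical examples, and the key point is that the general-purpose `random` module is predictable and unsuitable for security decisions:

```python
import secrets
import uuid

# Un-guessable values must come from a cryptographic RNG (item 104).
reset_token = secrets.token_hex(32)                 # e.g., a password-reset token
temp_name = f"upload-{secrets.token_urlsafe(16)}"   # a random temporary file name
request_id = uuid.uuid4()                           # uuid4 draws from os.urandom
```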
107. Do not disclose sensitive information in error responses, including system details, session identifiers or account information
108. Use error handlers that do not display debugging or stack trace information
109. Implement generic error messages and use custom error pages
110. The application should handle application errors and not rely on the server configuration
111. Properly free allocated memory when error conditions occur
112. Error handling logic associated with security controls should deny access by default
113. All logging controls should be implemented on a trusted system (e.g., The server)
114. Logging controls should support both success and failure of specified security events
115. Ensure logs contain important log event data
116. Ensure log entries that include un-trusted data will not execute as code in the intended log viewing interface or software
117. Restrict access to logs to only authorized individuals
118. Utilize a master routine for all logging operations
119. Do not store sensitive information in logs, including unnecessary system details, session identifiers or passwords
120. Ensure that a mechanism exists to conduct log analysis
121. Log all input validation failures
122. Log all authentication attempts, especially failures
123. Log all access control failures
124. Log all apparent tampering events, including unexpected changes to state data
125. Log attempts to connect with invalid or expired session tokens
126. Log all system exceptions
127. Log all administrative functions, including changes to the security configuration settings
128. Log all backend TLS connection failures
129. Log cryptographic module failures
130. Use a cryptographic hash function to validate log entry integrity

Data Protection:
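The hash-based log integrity control above (item 130) can be sketched as a keyed hash chain, where each entry's tag covers the previous tag so tampering or deletion anywhere breaks every later tag; the key below is a placeholder that in practice would come from a protected key store:

```python
import hashlib
import hmac

KEY = b"hypothetical-log-signing-key"  # placeholder; load from a protected store

def chain_entries(entries):
    """HMAC each entry together with the previous tag (item 130):
    modifying, reordering or deleting any entry invalidates all later tags."""
    tag = b"\x00" * 32
    out = []
    for entry in entries:
        tag = hmac.new(KEY, tag + entry.encode(), hashlib.sha256).digest()
        out.append((entry, tag.hex()))
    return out
```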
131. Implement least privilege, restrict users to only the functionality, data and system information that is required to perform their tasks
132. Protect all cached or temporary copies of sensitive data stored on the server from unauthorized access and purge those temporary working files as soon as they are no longer required.
133. Encrypt highly sensitive stored information, like authentication verification data, even on the server side. Always use well vetted algorithms, see “Cryptographic Practices” for additional guidance
134. Protect server-side source-code from being downloaded by a user
135. Do not store passwords, connection strings or other sensitive information in clear text or in any non-cryptographically secure manner on the client side. This includes embedding in insecure formats like: MS viewstate, Adobe flash or compiled code
136. Remove comments in user accessible production code that may reveal backend system or other sensitive information
137. Remove unnecessary application and system documentation as this can reveal useful information to attackers
138. Do not include sensitive information in HTTP GET request parameters
139. Disable auto complete features on forms expected to contain sensitive information, including authentication
140. Disable client side caching on pages containing sensitive information. Cache-Control: no-store, may be used in conjunction with the HTTP header control “Pragma: no-cache”, which is less effective, but is HTTP/1.0 backward compatible
141. The application should support the removal of sensitive data when that data is no longer required. (e.g. personal information or certain financial data)
142. Implement appropriate access controls for sensitive data stored on the server. This includes cached data, temporary files and data that should be accessible only by specific system users
143. Implement encryption for the transmission of all sensitive information. This should include TLS for protecting the connection and may be supplemented by discrete encryption of sensitive files or non-HTTP based connections
144. TLS certificates should be valid and have the correct domain name, not be expired, and be installed with intermediate certificates when required
145. Failed TLS connections should not fall back to an insecure connection
146. Utilize TLS connections for all content requiring authenticated access and for all other sensitive information
147. Utilize TLS for connections to external systems that involve sensitive information or functions
148. Utilize a single standard TLS implementation that is configured appropriately
149. Specify character encodings for all connections
150. Filter parameters containing sensitive information from the HTTP referer, when linking to external sites
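Items 144-147 (valid certificates, no insecure fallback, TLS for sensitive connections) correspond, for a Python client, to a properly configured `ssl.SSLContext`; this is a sketch of the client-side configuration only, and the host name in the usage comment is a placeholder:

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """A client-side TLS context that verifies certificate chains and
    host names (item 144) and refuses legacy protocol versions,
    rather than falling back to an insecure connection (item 145)."""
    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname checking by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = make_tls_context()
# Usage (not run here): ctx.wrap_socket(sock, server_hostname="example.com")
```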
151. Ensure servers, frameworks and system components are running the latest approved version
152. Ensure servers, frameworks and system components have all patches issued for the version in use
153. Turn off directory listings
154. Restrict the web server, process and service accounts to the least privileges possible
155. When exceptions occur, fail securely
156. Remove all unnecessary functionality and files
157. Remove test code or any functionality not intended for production, prior to deployment
158. Prevent disclosure of your directory structure in the robots.txt file by placing directories not intended for public indexing into an isolated parent directory. Then “Disallow” that entire parent directory in the robots.txt file rather than Disallowing each individual directory
159. Define which HTTP methods, GET or POST, the application will support and whether they will be handled differently in different pages of the application
160. Disable unnecessary HTTP methods, such as WebDAV extensions. If an extended HTTP method that supports file handling is required, utilize a well-vetted authentication mechanism
161. If the web server handles both HTTP 1.0 and 1.1, ensure that both are configured in a similar manner, or ensure that you understand any differences that may exist (e.g., handling of extended HTTP methods)
162. Remove unnecessary information from HTTP response headers related to the OS, web-server version and application frameworks
163. The security configuration store for the application should be able to be output in human readable form to support auditing
164. Implement an asset management system and register system components and software in it
165. Isolate development environments from the production network and provide access only to authorized development and test groups. Development environments are often configured less securely than production environments and attackers may use this difference to discover shared weaknesses or as an avenue for exploitation
166. Implement a software change control system to manage and record changes to the code both in development and production
167. Use strongly typed parameterized queries
168. Utilize input validation and output encoding and be sure to address meta characters. If these fail, do not run the database command
169. Ensure that variables are strongly typed
170. The application should use the lowest possible level of privilege when accessing the database
171. Use secure credentials for database access
172. Connection strings should not be hard coded within the application. Connection strings should be stored in a separate configuration file on a trusted system and they should be encrypted.
173. Use stored procedures to abstract data access and allow for the removal of permissions to the base tables in the database
174. Close the connection as soon as possible
175. Remove or change all default database administrative passwords. Utilize strong passwords/phrases or implement multi-factor authentication
176. Turn off all unnecessary database functionality (e.g., unnecessary stored procedures or services, utility packages, install only the minimum set of features and options required (surface area reduction))
177. Remove unnecessary default vendor content (e.g., sample schemas)
178. Disable any default accounts that are not required to support business requirements
179. The application should connect to the database with different credentials for every trust distinction (e.g., user, read-only user, guest, administrators)
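Item 167 (strongly typed parameterized queries) can be demonstrated with Python's built-in `sqlite3`; the schema and data are made up for illustration, and the same placeholder-binding pattern applies to any DB-API driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "admin"))

# User-controlled value bound as a typed parameter, never string-interpolated
# into the SQL text (item 167), so injection metacharacters stay inert.
user_input = "alice' OR '1'='1"
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] — the injection attempt matches nothing
```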
180. Do not pass user supplied data directly to any dynamic include function
181. Require authentication before allowing a file to be uploaded
182. Limit the type of files that can be uploaded to only those types that are needed for business purposes
183. Validate uploaded files are the expected type by checking file headers. Checking for file type by extension alone is not sufficient
184. Do not save files in the same web context as the application. Files should either go to the content server or in the database.
185. Prevent or restrict the uploading of any file that may be interpreted by the web server.
186. Turn off execution privileges on file upload directories
187. Implement safe uploading in UNIX by mounting the targeted file directory as a logical drive using the associated path or the chrooted environment
188. When referencing existing files, use a white list of allowed file names and types. Validate the value of the parameter being passed and if it does not match one of the expected values, either reject it or use a hard coded default file value for the content instead
189. Do not pass user supplied data into a dynamic redirect. If this must be allowed, then the redirect should accept only validated, relative path URLs
190. Do not pass directory or file paths, use index values mapped to pre-defined list of paths
191. Never send the absolute file path to the client
192. Ensure application files and resources are read-only
193. Scan user uploaded files for viruses and malware
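Item 183 (validate uploaded files by file header, not extension) amounts to checking magic bytes against a whitelist of business-approved types; the small signature table below is an illustrative subset, not an exhaustive list:

```python
# Magic-byte signatures for the few types allowed by business need (items 182-183).
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"%PDF-": "pdf",
}

def sniff_type(data: bytes):
    """Return the detected allowed type, or None to reject the upload.
    Extension checks alone are insufficient (item 183)."""
    for magic, kind in SIGNATURES.items():
        if data.startswith(magic):
            return kind
    return None
```

A rejected (None) result should fail closed, and accepted files should still be stored outside the web context and scanned (items 184, 193).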
194. Utilize input and output control for un-trusted data
195. Double check that the buffer is as large as specified
196. When using functions that accept a number of bytes to copy, such as strncpy(), be aware that if the destination buffer size is equal to the source buffer size, it may not NULL-terminate the string
197. Check buffer boundaries if calling the function in a loop and make sure there is no danger of writing past the allocated space
198. Truncate all input strings to a reasonable length before passing them to the copy and concatenation functions
199. Specifically close resources, don’t rely on garbage collection. (e.g., connection objects, file handles, etc.)
200. Use non-executable stacks when available
201. Avoid the use of known vulnerable functions (e.g., printf, strcat, strcpy etc.)
202. Properly free allocated memory upon the completion of functions and at all exit points
203. Use tested and approved managed code rather than creating new unmanaged code for common tasks
204. Utilize task specific built-in APIs to conduct operating system tasks. Do not allow the application to issue commands directly to the Operating System, especially through the use of application initiated command shells
205. Use checksums or hashes to verify the integrity of interpreted code, libraries, executables, and configuration files
206. Utilize locking to prevent multiple simultaneous requests or use a synchronization mechanism to prevent race conditions
207. Protect shared variables and resources from inappropriate concurrent access
208. Explicitly initialize all your variables and other data stores, either during declaration or just before the first usage
209. In cases where the application must run with elevated privileges, raise privileges as late as possible, and drop them as soon as possible
210. Avoid calculation errors by understanding your programming language’s underlying representation and how it interacts with numeric calculation. Pay close attention to byte size discrepancies, precision, signed/unsigned distinctions, truncation, conversion and casting between types, “not-a-number” calculations, and how your language handles numbers that are too large or too small for its underlying representation
211. Do not pass user supplied data to any dynamic execution function
212. Restrict users from generating new code or altering existing code
213. Review all secondary applications, third party code and libraries to determine business necessity and validate safe functionality, as these can introduce new vulnerabilities
214. Implement safe updating. If the application will utilize automatic updates, then use cryptographic signatures for your code and ensure your download clients verify those signatures. Use encrypted channels to transfer the code from the host server
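Item 205 (checksums to verify the integrity of code and configuration files) can be sketched as a SHA-256 comparison; note this only detects accidental or opportunistic tampering — the signed-update requirement of item 214 additionally needs a real signature scheme (e.g., Ed25519), since a plain hash can be replaced alongside the file:

```python
import hashlib
import hmac

def verify_checksum(data: bytes, expected_sha256_hex: str) -> bool:
    """Compare a file's SHA-256 digest against a known-good value (item 205),
    using a constant-time comparison."""
    actual = hashlib.sha256(data).hexdigest()
    return hmac.compare_digest(actual, expected_sha256_hex)
```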
This is a copy of the SCP checklist. For the project, see OWASP Secure Coding Practices – Quick Reference Guide.
Published 18 April 2019 – ID G00346593 – 58 min read
DevSecOps, modern web application design and high-profile breaches are expanding the scope of the AST market. Security and risk management leaders will need to meet tighter deadlines and test more complex applications by accelerating efforts to integrate and automate AST in the software life cycle.
By 2022, 10% of coding vulnerabilities identified by static application security testing (SAST) will be remediated automatically with code suggestions applied from automated solutions, up from less than 1% today.
Gartner defines the application security testing (AST) market as the sellers of products and services designed to analyze and test applications for security vulnerabilities. Gartner identifies three main styles of AST: static AST (SAST), dynamic AST (DAST) and interactive AST (IAST).
AST can be delivered as a tool or as a subscription service. Many vendors offer both options to reflect enterprise requirements. The 2019 Magic Quadrant will focus on a vendor’s SAST, DAST and IAST offerings, maturity and features as tools or as a service.

Gartner has observed that the major driver in the evolution of the AST market is the need to support enterprise DevOps initiatives. In a DevOps environment, customers require offerings that provide a higher degree of automation, correlation of findings and integration with DevOps pipeline tools. In general, clients desire solutions that focus on high-assurance, high-value findings with fast turnaround times. Buyers expect offerings to fit earlier in the development process, with testing often driven by developers rather than security specialists, and tightly integrated as part of the build and release process. As a result, this market evaluation focuses more heavily on the buyer’s needs when it comes to supporting rapid and accurate testing that is capable of being integrated in an increasingly automated fashion throughout the software development life cycle (SDLC).

AST vendors innovating, partnering and offering runtime application self-protection (RASP; a technology that allows applications to protect themselves from vulnerability exploitation at runtime) were weighted more heavily. Those offering software composition analysis (SCA; a technology used to identify open-source and third-party components in use in an application, and their known security vulnerabilities) were also weighted more heavily. These solutions help organizations to deliver security throughout the SDLC and to further automate identifying and mitigating risks.

Business-critical application security platforms, which incorporate AST for ERP platforms, are not the focus of this Magic Quadrant.
Although we weigh coverage of these platforms from broader AST solutions, specific solutions in the business-critical application security space typically focus on a single platform. They go beyond code analysis by incorporating modules such as configuration checks, vulnerability management and intrusion monitoring, which are all out of scope for this research.

AST for mobile applications is also not a major focus of this Magic Quadrant. Although Gartner has observed that enterprises today employ the AST techniques represented in this research for the mobile app analysis use case, it is not a major driver of client requirements. Clients often obtain these capabilities from specialized mobile-centric vendors as well as from vendors evaluated in this research. The three styles of AST, as well as techniques for behavioral analysis, are often employed to analyze source, byte or binary code. In addition, they observe the behavior of mobile apps to identify coding, design, packaging, deployment and runtime conditions that introduce security vulnerabilities.
Figure 1. Magic Quadrant for Application Security Testing
Source: Gartner (April 2019)
Based in the U.S. and France, CAST is a software intelligence vendor that focuses on reliability, efficiency and security. CAST provides enterprise SAST with the CAST Application Intelligence Platform (AIP). CAST also provides a desktop SAST solution, as well as CAST Highlight, an offering that provides SAST pattern analysis and SCA. The CAST Security Dashboard enables application security professionals to plan and resolve application security vulnerabilities.

During the past 12 months, the vendor acquired Antelink and integrated its SCA capabilities into CAST Highlight. CAST also improved automation by adding the ability to deliver new rules into the CAST platform between releases, without requiring an upgrade from the end user. The vendor also improved its data flow capabilities and analysis engine to boost performance in identifying security violations in very large applications. CAST also made improvements to its dashboard and management functionality.

CAST will appeal to large enterprises that require a solution combining security testing with quality testing, particularly those that already leverage CAST AIP in the development process.
Based in Israel, Checkmarx has a strong reputation for its SAST solution, has a significant presence in North America and Europe, and also serves the APAC region. Checkmarx provides CxSAST, a SAST product with broad language coverage that offers a variety of options to customize it for specific applications (such as by writing custom tests). Checkmarx also provides Checkmarx Open Source Analysis (CxOSA) with its partner, WhiteSource, for SCA. The vendor incorporates its CxCodebashing solution in the offering, a developer education platform that delivers short, gamified modules for secure coding training. Checkmarx’s managed service, AppSec Accelerator, offers SAST and DAST services (leveraging third-party DAST tools), an IAST solution called CxIAST, as well as program support to help development organizations integrate AST into their SDLCs.

During the past 12 months, the vendor has largely focused on extending the capabilities of the unified management and orchestration layer in the Checkmarx Software Exposure Platform. The vendor added unified policy management, cross-product correlation and intelligent remediation, as well as Kotlin language support.

Checkmarx’s products will appeal to application development and security organizations seeking a comprehensive set of AST products and services with strong enterprise-class SAST capabilities and program support services.
Based in the U.S. and present in North America, Contrast Security is an AST vendor that also sells in the European and APAC regions. Contrast Security's IAST (Contrast Assess) incorporates SCA. Contrast also offers RASP with its Contrast Protect product, which can be licensed independently or jointly with Assess. Contrast also offers a central management console, the Contrast TeamServer, which can be delivered as a service or on-premises. The testing approach, known as self-testing or passive IAST, does not require an external scanning component to generate attack patterns to identify vulnerabilities; rather, it is driven by application test activity, such as QA, executed automatically or manually.

During the past 12 months, Contrast Security has released new bug-tracking integrations, a new feature providing real-time visibility into testing coverage by showing which code paths were tested, and expanded platform as a service (PaaS) coverage by adding support for Azure Web Service.

Contrast is a good fit for organizations pursuing a DevOps methodology and looking for approaches to insert automated, continuous security testing that's transparent to developers and testers.
Based in the U.S., IBM is a global vendor of IT services and products. In December 2018, HCL announced the acquisition of several IBM products, including the AppScan IBM AST suite. The acquisition is to be completed in 2019. HCL has been solely responsible for development and support for the past two years, and is directly engaged with clients.

The AppScan portfolio includes AppScan Source and AppScan Standard for desktop SAST and DAST, respectively. It also provides AppScan Enterprise, which is an AST enterprise platform. IBM also provides AST as SaaS with IBM Security Application Security on Cloud (ASoC). The offerings within the portfolio can be used separately or in combination; for example, they can share scan configurations and settings across offerings. IBM's IAST technology, called glass box, is included as part of the DAST offerings. IBM also offers Open Source Analyzer (OSA) for SCA, which licenses the vulnerability and remediation database from a partner.

During the past 12 months, IBM added action-based crawling (ABC) to facilitate DAST, enabling a browser to interact with the crawled application and execute its components. It also added an AppScan Issue Management Gateway to synchronize ASoC with issue management tools, such as Jira Software from Atlassian. IBM has expanded its API-based automation capabilities for dynamic scanning with AppScan Enterprise, enhanced its IDE and CI plug-ins, and added predefined and custom policies for easier compliance. IBM also added capabilities to leverage Swagger to automate DAST scanning of REST APIs and added SAST language support for Python and Angular.

AppScan will appeal to enterprises seeking a single provider of AST technologies with a focus on risk-based management and enterprise-class capabilities.
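The Swagger-driven DAST automation mentioned here can be illustrated in general terms: a scanner parses an OpenAPI (Swagger) document to enumerate the endpoints and HTTP methods it should exercise, instead of relying on crawling alone. The sketch below is a minimal illustration of that idea, not any vendor's actual implementation; the spec fragment and endpoint names are hypothetical.

```python
import json

# Minimal illustrative OpenAPI (Swagger) 2.0 fragment.
# The paths and operations below are hypothetical.
SPEC = json.loads("""
{
  "swagger": "2.0",
  "basePath": "/api/v1",
  "paths": {
    "/users": {
      "get":  {"summary": "List users"},
      "post": {"summary": "Create a user"}
    },
    "/users/{id}": {
      "get":    {"summary": "Fetch a user"},
      "delete": {"summary": "Delete a user"}
    }
  }
}
""")

def enumerate_targets(spec):
    """Yield (HTTP method, full path) pairs a DAST scanner could exercise."""
    base = spec.get("basePath", "")
    for path, operations in spec.get("paths", {}).items():
        for method in operations:
            yield method.upper(), base + path

for method, url in enumerate_targets(SPEC):
    print(method, url)
```

A real scanner would additionally read parameter definitions and authentication settings from the spec to build well-formed requests before injecting test payloads.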
Based in the U.K., Micro Focus is a global provider of AST products and services under the well-known Fortify brand. Micro Focus sales have a global reach, with a strong presence in North America, as well as the European and APAC markets. Fortify offers Static Code Analyzer (SAST), WebInspect (DAST and IAST), Software Security Center (its console) and Application Defender (monitoring and RASP). Fortify provides its AST as a product, as well as in the cloud, with Fortify on Demand (FoD). Mobile AST is delivered via FoD. Fortify's SAST can leverage real-time, in-line vulnerability detection via a spell-checker-style plug-in (called Security Assistant) in the Eclipse and Visual Studio IDEs. Security Assistant highlights vulnerable code as the developer programs.

During the past year, Fortify has released Visual Studio support for Security Assistant, expanded its SCA partnerships to include Black Duck, Sonatype and Snyk, and made turnaround time improvements to FoD DAST.

Micro Focus Fortify's AST offerings should be considered by enterprises looking for a comprehensive set of AST capabilities, either as a product or service, or both combined, with enterprise-class reporting and integration capabilities.
Based in Foster City, California, Qualys is a provider of cloud-based security services, with an emphasis on vulnerability assessment/vulnerability management (VA/VM). It has a strong presence in North America and the APAC region, as well as a presence in the European market. Qualys offers Web Application Scanning (WAS), a completely automated DAST service that integrates with the other Qualys security services in the Qualys Cloud Platform. Qualys provides WAS at an affordable per-year subscription in different small or midsize business (SMB) and enterprise packages, as well as pay-per-scan licensing.

During the past year, Qualys released Version 6 of WAS, which introduced support for testing of Swagger-based REST APIs, added a Jenkins CI plug-in, and made available a Chrome extension to record browser activity for replay in WAS.

Qualys is a visible provider of DAST as a cloud service, with sizable market share. Organizations looking for a lower-cost, automated DAST service that provides malware scanning, or those looking for a DAST capability as an extension to their VA/VM program, should consider Qualys.

Qualys did not respond to requests for supplemental information, although it did provide final factual review. Therefore, Gartner analysis is based on other credible and accepted public sources.
Based in Boston, Massachusetts, Rapid7 is a provider of security, data and analytics software and IT services. In the AST space, Rapid7 provides DAST as a product and a service. Its offering consists of a desktop web app scanner called AppSpider Pro, an on-premises enterprise DAST tool called AppSpider Enterprise, and DAST as a service, under the name InsightAppSec. In addition, Rapid7 provides Managed AppSec services, which offer the same DAST service in a completely outsourced fashion and also include vulnerability validation services.

During the past 12 months, Rapid7 introduced the ability to scan using uploaded Swagger and WSDL files, incremental scanning, and validation scanning, which allows users to confirm that a vulnerability remediation was effective. Rapid7 also added an InsightAppSec public API to allow use of the tool without passing through the user interface. Also in the InsightAppSec offering, Rapid7 introduced a system that manages, shares and encrypts authentication session recordings, and built a scan activity feed that shows details of scans in progress, completed or failed. In October 2018, Rapid7 acquired RASP vendor tCell.

In addition to a granular and customizable DAST solution, Rapid7's Insight cloud includes DAST (InsightAppSec), vulnerability management (InsightVM), security information and event management (InsightIDR), security orchestration, automation and response (InsightConnect), and log storage and analytics (InsightOps). This can be a good fit for organizations looking for a SecOps enabler.
Based in Mountain View, California, Synopsys is a global company with offerings in the software and semiconductor areas. Synopsys has been executing a strategy to expand its AST portfolio during the past few years, adding Cigital (App Sec Services), Quotium's Seeker IAST, Codenomicon (SCA), Protecode (SCA), Coverity (SAST) and Black Duck (SCA). This merger and acquisition (M&A) push has provided it with good coverage of the secure SDLC market, through products and services that it has been attempting to integrate into a complete, seamless offering.

During the past 12 months, the vendor has introduced a new platform, Polaris, which is intended to be the central management console for all Synopsys AST products. The SAST solution was the first to be fully integrated into Polaris, and the vendor intends to integrate the rest of the platform throughout 2019. The vendor also introduced a new lightweight IDE plug-in (initially for the IntelliJ IDE, with support for the Eclipse and Visual Studio IDEs introduced in February 2019), Code Sight, meant to run full SAST analysis by continuously scanning in the background while a developer is coding.

Synopsys should be considered by organizations looking for a complete AST offering that want variety in AST technologies, assessment depth, deployment options and licensing.
Headquartered in the U.S., Veracode is an AST provider with a strong presence in the North American market, as well as a presence in the European market. The Veracode offering includes a family of products that provide SAST, DAST and SCA services. Veracode also provides mobile AST and a vendor security testing attestation program known as Veracode VAST.

During this evaluation, CA Technologies (which had previously acquired Veracode) was acquired by Broadcom, with the deal closing in November 2018. In the same month, it was announced that Veracode would be acquired from Broadcom by private equity firm Thoma Bravo. During the past 12 months, Veracode acquired an SCA company, SourceClear, which it has started to integrate into the Veracode platform. The vendor expanded SAST language coverage, upgraded its DAST engine for increased performance and accuracy, and consolidated previously segmented components into a single DAST offering.

Veracode will meet the requirements of organizations looking for a complete portfolio of AST services, with broad language and framework coverage and ease of implementation and use.
Based in the U.S., WhiteHat Security is a global provider of AST as a service. WhiteHat Sentinel provides SAST, SCA and DAST, with specific versions for the development, build and operation phases. Sentinel SAST can scan both binaries and source code. WhiteHat Security also provides mobile testing in partnership with NowSecure. The results of all WhiteHat DAST, SCA, mobile AST and SAST scans can be reviewed upon request by an expert in WhiteHat's Threat Research Center before delivery to the customer. When on-premises scanning is a requirement, WhiteHat Security uses a virtual machine that keeps some of the analysis local and sends limited, nonsensitive data to the SaaS back end.

During the past 12 months, WhiteHat Security enhanced its API security testing solution, introduced automated machine-learning-based vulnerability verification, and improved DAST testing of single-page applications. Also, WhiteHat introduced stand-alone SCA and has started to segment its SAST and SCA offerings with varying levels of depth and automation for different phases of the SDLC.

WhiteHat Security should be considered by buyers seeking an AST SaaS platform and, especially, DAST services delivered by an expert cloud-service-testing provider with a scalable solution.
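Several of the profiles above mention SCA offerings (CxOSA, Black Duck, SourceClear, stand-alone SCA). At its core, the technique matches an application's declared components against a database of known-vulnerable versions. The toy sketch below illustrates only that matching step, under the assumption of an in-memory advisory list; real SCA products use curated vulnerability feeds and version-range matching. Component names and advisory IDs are hypothetical.

```python
# Toy sketch of software composition analysis (SCA): flag declared
# components whose exact (name, version) pair appears in an advisory list.
# All component names and advisory IDs below are hypothetical.
KNOWN_VULNS = {
    ("examplelib", "1.2.0"): ["ADVISORY-0001"],
    ("parserkit", "3.4.1"): ["ADVISORY-0002", "ADVISORY-0003"],
}

def audit(components):
    """Return a mapping of flagged (name, version) pairs to their advisories."""
    return {c: KNOWN_VULNS[c] for c in components if c in KNOWN_VULNS}

# A hypothetical application manifest: one vulnerable, one clean component.
manifest = [("examplelib", "1.2.0"), ("safelib", "2.0.0")]
for (name, version), advisories in audit(manifest).items():
    print(f"{name} {version}: {', '.join(advisories)}")
```

The same lookup also supports the licensing checks mentioned later in this document: the advisory list can carry license metadata alongside vulnerability identifiers.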
We review and adjust our inclusion criteria for Magic Quadrants as markets change. As a result of these adjustments, the mix of vendors in any Magic Quadrant may change over time. A vendor’s appearance in a Magic Quadrant one year and not the next does not necessarily indicate that we have changed our opinion of that vendor. It may be a reflection of a change in the market and, therefore, changed evaluation criteria, or of a change of focus by that vendor.
CAST and Acunetix were added.
Positive Technologies, SiteLock and Trustwave were dropped based on our inclusion and exclusion criteria.
To qualify for inclusion, vendors need to meet the following criteria as of 1 October 2018:
We will not include vendors in this research that:
Magic Quadrants are used to evaluate the commercial offering, sales execution, vision, marketing and support of products in the market. This excludes the evaluation of open-source software (OSS) or vendor products that rely heavily on and bundle open-source tools.
Several vendors that are not evaluated in this Magic Quadrant are present in the AST space or in markets that overlap with AST. These vendors do not currently meet our inclusion criteria; however, they either provide AST features or address specific AST requirements and use cases. These providers range from consultancies and professional services to related solution categories, including:
Gartner tracks and can discuss in inquiry specific additional AST vendors, including: edgescan, Fasoo, GitLab, GrammaTech, ImmuniWeb, Kiuwan, Netsparker, NSFOCUS, N-Stalker, Onapsis (Virtual Forge), PortSwigger, Positive Technologies, SiteLock, SonarQube, Trustwave and Wallarm, as well as embedded functionality from major public cloud providers. In addition, we track and can discuss vendors in the listed adjacent markets (see “Hype Cycle for Application Security, 2018”).
Product or Service: This refers to core goods and services that compete in and/or serve the defined market. It includes current product and service capabilities, quality, feature sets and skills, among others. These can be offered natively or through OEM agreements/partnerships, as defined in the market definition and detailed in the subcriteria. This criterion specifically evaluates current core AST product/service capabilities, quality and accuracy, and feature sets. The efficacy and quality of ancillary capabilities and integration into the software development life cycle are also valued.

Overall Viability: Viability includes an assessment of the organization's overall financial health, as well as the financial and practical success of the business unit. It views the likelihood of the organization to continue to offer and invest in the product, as well as the product's position in the current portfolio. Specifically, we look at the vendor's focus on AST, its growth and estimated AST market share, as well as its customer base.

Sales Execution/Pricing: This criterion refers to the organization's capabilities in all presales activities and the structure that supports them. This includes deal management, pricing and negotiation, presales support and the overall effectiveness of the sales channel. We specifically look for capabilities such as how the vendor supports proofs of concept, or pricing options for both simple and complex use cases. The evaluation also includes feedback received from clients on experiences with vendor sales support, pricing and negotiations.

Market Responsiveness/Record: This is the ability of the vendor to respond, change direction, be flexible and achieve competitive success as opportunities develop, competitors act, customer needs evolve, and market dynamics change.
This criterion also considers the vendor's history of responsiveness to changing market demands. We evaluate how the vendor's broader application security capabilities match enterprises' functional requirements, and the vendor's track record in delivering innovative features when the market demands them. We also account for vendors' appeal with security technologies complementary to AST.

Marketing Execution: This criterion describes the clarity, quality, creativity and efficacy of programs designed to deliver the organization's message in order to influence the market, promote the brand, increase awareness of products and establish a positive identification in the minds of customers. This "mind share" can be driven by a combination of publicity, promotional activity, thought leadership, social media, referrals and sales activities. We evaluate elements such as the vendor's reputation and credibility among security specialists.

Customer Experience: This refers to products, services and/or programs that enable customers to achieve anticipated results with the products evaluated. Specifically, this includes quality supplier/buyer interactions, technical support, and account support. This may also include ancillary tools, customer support programs, availability of user groups, service-level agreements and others. We evaluate elements such as the ease of use of the tool as perceived by end users and customers.
|Product or Service|High|
Source: Gartner (March 2019)
Market Understanding: We weight a vendor's ability to understand customer needs and translate them into products and services. Vendors that show a clear vision of their market listen to and understand customer demands, and can shape or enhance market changes with their added vision. This includes the vendor's ability to understand buyers' needs and translate them into effective and usable AST (SAST, DAST and IAST) products and services. In addition to examining a vendor's key competencies in this market, we assess its awareness of the importance of:
Marketing Strategy: This refers to a clear, differentiated set of messages consistently communicated internally and externalized through social media, advertising, customer programs and positioning statements. The visibility and credibility of the vendor's security research labs is also a consideration. We also consider how well that messaging communicates the suitability of the vendor's solution for evolving client needs.

Sales Strategy: This criterion describes a sound strategy for selling that uses the appropriate networks, including direct and indirect sales, marketing, service and communication. It also includes whether the vendor has partners that extend the scope and depth of its market reach, expertise, technologies, services and customer base. Specifically, we look at how a vendor reaches the market with its solution and sells it, for example by leveraging partners and resellers, security reports or web channels.

Offering (Product) Strategy: This refers to an approach to product development and delivery that emphasizes market differentiation, functionality, methodology and features as they map to current and future requirements. Specifically, we look at the product and service AST offering, and how its extent and modularity can meet different customer requirements and testing program maturity levels. We evaluate the vendor's development and delivery of a solution that is differentiated from the competition in a way that uniquely addresses critical customer requirements. We also look at how offerings can integrate relevant non-AST functionality that can enhance the security of applications overall.

Innovation: Direct, related, complementary and synergistic layouts of resources, expertise or capital for investment, consolidation, defensive or pre-emptive purposes are considered. Specifically, we look at how vendors are innovating to support evolving client requirements, such as testing for DevOps initiatives, API security testing, and serverless and microservices architectures. We also evaluate developing methods to make security testing more accurate. We value innovations in IAST, but also in areas such as SCA, RASP and behavioral testing. We also value innovation in DAST to support modern web and infrastructure requirements, such as rich internet applications (RIAs) and cloud platforms.

Geographic Strategy: This criterion evaluates the vendor's strategy to direct resources, skills and offerings to meet the specific needs of geographies outside the "home" or native geography, either directly or through partners, channels and subsidiaries, as appropriate for that geography and market. We evaluate the worldwide availability and support for the offering, including local language support for tools, consoles and customer service.
|Offering (Product) Strategy|High|
|Business Model|Not Rated|
|Vertical/Industry Strategy|Not Rated|
Source: Gartner (March 2019)
Leaders in the AST market demonstrate breadth and depth of AST products and services. Leaders typically provide mature, reputable SAST and DAST, and demonstrate vision through development of IAST or other emerging AST techniques in their solutions. Leaders also should provide organizations with AST-as-a-service delivery models for testing, or with a choice of a tool and AST as a service, as well as an enterprise-class reporting framework supporting multiple users, groups and roles, ideally via a single management console. Leaders should be able to support the testing of mobile applications and should exhibit strong execution in the core AST technologies they offer. While they may excel in specific AST categories, Leaders should offer a complete platform with strong market presence, growth and client retention.
Challengers in this Magic Quadrant are vendors that have executed consistently, often with strength in a particular technology (for example, SAST or DAST) or by focusing on a single delivery model (for example, on AST as a service only). In addition, they have demonstrated substantial competitive capabilities against the Leaders in their particular focus area and have demonstrated momentum in their customer base in terms of overall size and growth.
Visionaries in this Magic Quadrant are vendors that are particularly innovative in AST with a strong vision that addresses the evolving needs of the market. It includes vendors that provide innovative capabilities to accommodate DevOps, to integrate in the SDLC, or to identify vulnerabilities with alternative technologies to established SAST and DAST, such as IAST. Visionaries may not execute as consistently as Leaders or Challengers and may not have comprehensive offerings in terms of SAST, DAST and IAST.
Niche Players offer viable, dependable solutions that meet the needs of specific buyers. Niche Players are less likely to appear on shortlists, but fare well with buyers looking for "best of breed" or "best fit" to address a particular business and technical use case that matches the vendor's focus. Niche Players may address subsets of the overall market, and often can do so more efficiently than the Leaders. Enterprises tend to pick Niche Players when the focus is on a few important functions, on specific vendor expertise, or when they have an established relationship with the vendor. Niche Players typically focus on a specific type of AST technology or delivery model, or a specific geographic region.
Through 2022, the AST market is projected to have a 10% compound annual growth rate (CAGR). This continues to be a fast-growing segment in the information security space, which itself is expected to grow at a five-year CAGR of 9%. The AST market size is estimated to reach $1.15 billion by the end of 2019.

A trend of acquisitions and shake-ups to major players in the AST market continued in 2018, though the development of new solutions to counteract long-standing challenges with AST was somewhat muted. In November 2018, Broadcom finalized the planned acquisition of CA Technologies, at the same time selling off the Veracode business unit for $950 million to Thoma Bravo.[1] The same private equity firm had agreed to acquire Imperva, a leader in the web application firewall market (see "Magic Quadrant for Web Application Firewalls"), only a month earlier.[2] After entering into an IP partnership with HCL Technologies, IBM announced it would sell its AppScan software to HCL in December 2018.[3] Buyers should enter long-term contracts cautiously, given the volatility exhibited in the market.

In addition, the market exhibits signs of increasing consolidation and commoditization, at least with respect to SAST and DAST for traditional web applications. Most major workflows and requirements have been worked out, and fewer development teams today are starting secure SDLC practices from scratch, instead relying on widely practiced architectures that have solidified over the past few years. In 2018, the number of Gartner end-user client conversations on the fundamentals of secure application development decreased by around 45% from the year prior, reflecting greater standardization around secure SDLC practices.[4] The continued maturation of programs has led to some homogenization around core practices and the AST features required to support them.
Innovations that seemed novel only a few years prior, such as the use of ML to reduce false positives, are now increasingly must-have features. Gartner believes this will continue for some years, which is good news for customers. As vendor capabilities and the programs they support converge, it becomes easier for clients to get the features they want at competitive prices. However, newer trends in application development, such as DevSecOps, containers, serverless and edge computing, have not fit well with traditional toolsets, and Gartner predicts a second wave of innovation to address these challenges. End-user client inquiries around emerging topics such as DevSecOps (34% year over year), container security (55% year over year) and API security (77% year over year) increased.[4]

Vendor portfolios remained largely unchanged in the face of ongoing and anticipated shifts, with few of the new offerings or innovative developments witnessed in years prior, and with most innovation coming from smaller vendors. This forces many clients to pursue point solutions to address emerging use cases, such as vulnerability identification in APIs and other facets of modern application development such as serverless applications. The continued acquisitions, coupled with the stagnation in new development, point to a mature market coalescing around a well-defined use case: the identification of self-inflicted vulnerabilities in custom-code web applications. This likely comes as a disappointment to many AST clients who still struggle to embed AST into their software development life cycle while meeting the challenges of modern development paradigms. Gartner clients expect AST players to advance their offerings to meet these related challenges by, for example, improving their capabilities to analyze APIs and inspect containers. However, this will need to be matched by increasing maturity in organizational application security disciplines and DevOps practices.
Many clients have sought out point solutions from innovative startups to address challenges such as those in testing APIs. This is a potential missed opportunity for AST vendors if they fail to capture this use case. Yet there appears to be no shortage of business from existing solutions, and client inquiry indicates that much of the average organization's web or enterprise IT application portfolio still needs to be tested.

SCA solutions have become critical components of application security programs as more of the codebase incorporates open-source components. SCA products analyze application composition to detect components known to have security and/or functionality vulnerabilities, or that require proper licensing. SCA helps ensure that the enterprise software supply chain includes only components that have undergone security testing and, therefore, supports secure application development and assembly. Gartner clients have long sought these capabilities from AST vendors. As such, vendors in this Magic Quadrant deliver SCA through homegrown solutions or partnerships with leading SCA vendors to supply analysis and governance capabilities to their clients.

A distinct category exists in application security for solutions aimed at supporting security testing and vulnerability assessment for mission-critical, proprietary, commercial off-the-shelf (COTS) applications. Business-critical application security is the set of processes and technologies that focuses on the security, risk and compliance of business-critical applications, most notably ERP; but it can also be extended to human resources and other business-critical applications.

CSSTPs represent a significant deviation from traditional application and security penetration testing services, but have the potential to disrupt the traditional model and offer significant, though often supplementary, benefits.
CSSTPs leverage a large pool of crowdsourced security testing practitioners to identify vulnerabilities through penetration testing and other techniques. CSSTPs also offer bug bounty program administration services, which often include options for vetting bounty seekers and payment processing, as well as options for full public or smaller private/invite-only bounty programs. CSSTP services enable organizations to leverage a diverse range of skills that might otherwise be difficult to replicate with traditional consulting services or AST. Thus, CSSTPs can augment an organization's application security expertise. Gartner has already observed partnerships between AST and CSSTP vendors.

Four main market observations are worthy of note:
Gartner used the following input to develop this Magic Quadrant:
[1] "Thoma Bravo to Buy Software Security Firm Veracode for $950 Million." Reuters.
[2] "Buyout Firm Thoma Bravo Adds Imperva to Cyber Portfolio." Reuters.
[3] "HCL Technologies to Acquire Select IBM Software Products for $1.8B." Canada News Wire.
[4] Conclusions are based on end-user client inquiry data collected for calendar year 2018.
Product/Service: Core goods and services offered by the vendor for the defined market. This includes current product/service capabilities, quality, feature sets, skills and so on, whether offered natively or through OEM agreements/partnerships, as defined in the market definition and detailed in the subcriteria.

Overall Viability: Viability includes an assessment of the overall organization's financial health, the financial and practical success of the business unit, and the likelihood that the individual business unit will continue investing in the product, will continue offering the product and will advance the state of the art within the organization's portfolio of products.

Sales Execution/Pricing: The vendor's capabilities in all presales activities and the structure that supports them. This includes deal management, pricing and negotiation, presales support, and the overall effectiveness of the sales channel.

Market Responsiveness/Record: Ability to respond, change direction, be flexible and achieve competitive success as opportunities develop, competitors act, customer needs evolve and market dynamics change. This criterion also considers the vendor's history of responsiveness.

Marketing Execution: The clarity, quality, creativity and efficacy of programs designed to deliver the organization's message to influence the market, promote the brand and business, increase awareness of the products, and establish a positive identification with the product/brand and organization in the minds of buyers. This "mind share" can be driven by a combination of publicity, promotional initiatives, thought leadership, word of mouth and sales activities.

Customer Experience: Relationships, products and services/programs that enable clients to be successful with the products evaluated. Specifically, this includes the ways customers receive technical support or account support. This can also include ancillary tools, customer support programs (and the quality thereof), availability of user groups, service-level agreements and so on.

Operations: The ability of the organization to meet its goals and commitments. Factors include the quality of the organizational structure, including skills, experiences, programs, systems and other vehicles that enable the organization to operate effectively and efficiently on an ongoing basis.
Market Understanding: Ability of the vendor to understand buyers' wants and needs and to translate those into products and services. Vendors that show the highest degree of vision listen to and understand buyers' wants and needs, and can shape or enhance those with their added vision.

Marketing Strategy: A clear, differentiated set of messages consistently communicated throughout the organization and externalized through the website, advertising, customer programs and positioning statements.

Sales Strategy: The strategy for selling products that uses the appropriate network of direct and indirect sales, marketing, service, and communication affiliates that extend the scope and depth of market reach, skills, expertise, technologies, services and the customer base.

Offering (Product) Strategy: The vendor's approach to product development and delivery that emphasizes differentiation, functionality, methodology and feature sets as they map to current and future requirements.

Business Model: The soundness and logic of the vendor's underlying business proposition.

Vertical/Industry Strategy: The vendor's strategy to direct resources, skills and offerings to meet the specific needs of individual market segments, including vertical markets.

Innovation: Direct, related, complementary and synergistic layouts of resources, expertise or capital for investment, consolidation, defensive or pre-emptive purposes.

Geographic Strategy: The vendor's strategy to direct resources, skills and offerings to meet the specific needs of geographies outside the "home" or native geography, either directly or through partners, channels and subsidiaries as appropriate for that geography and market.
Here is a list of discussion points that code reviewers and peer developers need to take into consideration. This list is not comprehensive, but a suggested starting point for an enterprise to make sure code reviews are effective and not disruptive or a source of discord. If code reviews become a source of discord within an organization, the effectiveness of finding security and functional bugs will decline, and developers will find a way around the process. Being a good code reviewer requires good social skills, and like learning to code, it is a skill that requires practice.
• You don’t have to find fault in the code to do a code review. If you always find something to criticize, your comments will lose credibility.
• Do not rush a code review. Finding security and functionality bugs is important, but other developers or team members are waiting on you, so you need to temper your thoroughness with the proper amount of urgency.
• When reviewing code you need to know what is expected. Are you reviewing for security, functionality, maintainability, and/or style? Does your organization have tools and documents on code style, or are you using your own coding style? Does your organization give developers tools to flag code that violates the organization’s own coding standards?
• Before beginning a code review, does your organization have a defined way to resolve any conflicts that may come up between the developer and the code reviewer?
• Does the code reviewer have a defined set of artifacts that need to be produced as the result of the code review?
• What is the process to follow when code needs to be changed during the code review?
• Is the code reviewer knowledgeable about the domain of the code that is being reviewed? Ample evidence shows that code reviews are most effective if the code reviewer is knowledgeable about the domain of the code, e.g., compliance regulations for industry and government, business functionality, risks, etc.
Source: OWASP Code Review Guide 2.0
Welcome to the second edition of the OWASP Code Review Guide Project. The second edition brings the
successful OWASP Code Review Guide up to date with current threats and countermeasures. This version
also includes new content reflecting the OWASP community’s experiences of secure code review.
The Second Edition of the Code Review Guide has been developed to advise software developers and
management on the best practices in secure code review, and how it can be used within a secure software
development life-cycle (S-SDLC). The guide begins with sections that introduce the reader to
secure code review and how it can be introduced into a company’s S-SDLC. It then concentrates on
specific technical subjects and provides examples of what a reviewer should look for when reviewing code.
The contents and the structure of the book have been carefully designed. Further, all the contributed chapters have been judiciously
edited and integrated into a unifying framework that provides uniformity in structure and style.
This book is written to satisfy three different perspectives.
Executive Summary

Legacy software acquisition and development practices in the DoD do not provide the agility to deploy new software “at the speed of operations”. In addition, security is often an afterthought, not built in from the beginning of the lifecycle of the application and underlying infrastructure. DevSecOps is the industry best practice for rapid, secure software development.

DevSecOps is an organizational software engineering culture and practice that aims at unifying software development (Dev), security (Sec) and operations (Ops). The main characteristic of DevSecOps is to automate, monitor, and apply security at all phases of the software lifecycle: plan, develop, build, test, release, deliver, deploy, operate, and monitor. In DevSecOps, testing and security are shifted to the left through automated unit, functional, integration, and security testing – this is a key DevSecOps differentiator, since security and functional capabilities are tested and built simultaneously.

The benefits of adopting DevSecOps include:
• Reduced mean-time to production: the average time it takes from when new software features are required until they are running in production;
• Increased deployment frequency: how often a new release can be deployed into the production environment;
• Fully automated risk characterization, monitoring, and mitigation across the application lifecycle;
• Software updates and patching at “the speed of operations”.

This DoD Enterprise DevSecOps Reference Design describes the DevSecOps lifecycle, supporting pillars, and DevSecOps ecosystem; lists the tools and activities for the DevSecOps software factory and ecosystem; introduces the DoD enterprise DevSecOps container service that provides hardened DevSecOps tools and deployment templates for program application DevSecOps teams to select from; and showcases a sampling of software factory reference designs and application security operations.
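As a minimal illustration of the shift-left gating described above, the sketch below runs each automated stage in order and stops the pipeline at the first failure. The stage names and commands are placeholders chosen for this example; they are not tools or stages mandated by the reference design.

```python
# Minimal sketch of "shift-left" pipeline gating: every automated stage
# (unit, integration, security testing) must pass before the next runs,
# so defects are caught early instead of reaching production.
# Stage commands here are placeholders; a real pipeline would invoke a
# test runner, a static analyzer, a dependency scanner, and so on.
import subprocess

STAGES = [
    ("unit-tests", ["true"]),         # placeholder for a unit-test runner
    ("integration-tests", ["true"]),  # placeholder for integration tests
    ("security-scan", ["true"]),      # placeholder for a security scanner
]

def run_pipeline(stages):
    """Run stages in order, failing fast at the first broken gate."""
    for name, cmd in stages:
        if subprocess.run(cmd).returncode != 0:
            return f"FAILED: {name}"  # gate closed: halt the pipeline
    return "all gates passed"

print(run_pipeline(STAGES))
```

Because every stage is automated and gating, a failing security scan blocks the release just as a failing unit test does, which is the behavior the reference design describes as testing security and functional capabilities simultaneously.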
This DoD Enterprise DevSecOps Reference Design provides implementation and operational guidance to Information Technology (IT) capability providers, IT capability consumers, application teams, and Authorizing Officials.