Chapter 3 – Security Design Guidelines for Web Services
- J.D. Meier, Carlos Farre, Jason Taylor,
Prashant Bansode, Steve Gregersen, Madhu Sundararajan, Rob Boucher
- Security Architecture and Design Issues for Web Services
- Deployment Considerations
- Auditing and Logging
- Authentication
- Authorization
- Configuration Management
- Exception Management
- Message Protection
- Message Validation
- Sensitive Data
- Session Management
Designing a Web service with security in mind presents developers and architects with an interesting set of challenges. Some are unique to service-oriented architecture, and some are similar to the challenges that face enterprise Web application development.
A Web service is most commonly implemented as a wrapper – that is, as an interface between a client consuming the service and back-end business logic components doing the actual work. A Web service acts as a trust boundary in your application architecture.
By its nature, a Web service acts as a gateway between trusted business components and less trusted or untrusted client components. For this reason, it is impossible to think about the security of a Web service without also thinking about authentication, authorization,
protection of sensitive data on the network, and handling potentially malicious input. Each of these areas represents key decisions you will need to make in order to maintain the security of your application. By following security best practices in the design
of your Web service, you can apply proven practices to improve your decision making and make a cascading positive impact on the overall security of your application. Use the following design guidelines to avoid wasting effort on security problems for which proven solutions already exist.
Security Architecture and Design Issues for Web Services
During the design phase, it is important to think like an attacker and consider potential vulnerabilities that can impact your service. A clear understanding of attacks and vulnerabilities will put you in the right mindset to mitigate potential problems and
create a design that is resistant to malicious attack. The following table outlines key problem areas for each category in the Web service security frame.
|Category|Potential problem due to bad design|
|---|---|
|Auditing and Logging|Failure to observe signs of intrusion; inability to prove a user’s actions; difficulties in problem diagnosis|
|Authentication|Identity spoofing; password cracking; elevation of privileges; unauthorized access|
|Authorization|Access to confidential or restricted data; tampering; execution of unauthorized operations|
|Configuration Management|Unauthorized access to administration interfaces; unauthorized ability to update configuration data; unauthorized access to user accounts and account profiles|
|Exception Management|Denial of service (DoS) attacks; disclosure of sensitive system-level details; elevation of privilege|
|Message Encryption|Sniffing of confidential data off the network; stealing users’ credentials or session information|
|Message Replay Detection|Replaying user messages to gain unauthorized access to resources or data|
|Message Signing|Tampering with messages on the network without detection; failure to mutually authenticate, which allows an attacker to send messages as if they were a legitimate user|
|Message Validation|Messages containing malicious input; cross-site scripting or SQL injection attacks on the service or clients that rely on the service|
|Sensitive Data|Confidential information disclosure and data tampering|
|Session Management|Session hijacking and/or identity spoofing due to capture of the session ID|
Deployment Considerations
During the application design phase, you should review your corporate security policies and procedures together with the infrastructure on which your application is to be deployed. Frequently, the target environment is rigid, and your application design must
reflect its restrictions. Sometimes design tradeoffs are required; for example, because of protocol or port restrictions or specific deployment topologies. Identify constraints early in the design phase in order to avoid surprises later, and involve members
of the network and infrastructure teams to help with this process.
Consider the following guidelines before deploying your Web service:
- Identify security policies and procedures. A security policy determines what your applications are allowed to do and what the users of the application are permitted to do. More importantly, a security policy defines restrictions to determine what
applications and users are not allowed to do. When designing your applications, identify and work within the framework defined by your corporate security policy to make sure you do not breach any policy that might prevent the application from being deployed.
- Understand network infrastructure components. Make sure you understand the network structure provided by your target environment, as well as the baseline security requirements of the network in terms of filtering rules, port restrictions, supported
protocols, and so on.
- Identify how firewalls and firewall policies are likely to affect your application’s design and deployment. If present, firewalls separating the Internet-facing applications from the internal network, as well as additional firewalls in front of the
database, can affect your possible communication ports and consequently authentication options from the Web server to remote application and database servers. For example, Windows authentication requires additional ports.
- Identify protocols, ports, and services. At the design stage, consider what protocols, ports, and services are allowed to access internal resources from the Web servers in the perimeter network. Also identify the protocols and ports that the application
design requires, and analyze the potential threats that can occur from opening new ports or using new protocols.
- Communicate assumptions. Communicate and record any assumptions made about network and application-layer security and which component will handle what task. This prevents security controls from being overlooked when both the development and network
teams assume that the other team is addressing the issue. Pay attention to the security defenses that your application relies on the network to provide. Consider the implications of a change in network configuration. For example, how much security would you
lose if you implement a specific network change?
- Analyze deployment topologies. Your application’s deployment topology, and whether you have a remote application tier, are key considerations that must be incorporated into your design. If you have a remote application tier, you need to consider
how to secure the network between servers in order to address the network eavesdropping threat and provide privacy and integrity for sensitive data.
- Consider identity flow. Also consider identity flow and identify the accounts that will be used for network authentication when your application connects to remote servers. A common approach is to use a least-privileged process account and create
a duplicate (mirrored) account on the remote server with the same password. Alternatively, you might use a domain process account, which provides easier administration but is more problematic to secure because of the difficulty of limiting the account’s use
throughout the network. An intervening firewall or separate domains without trust relationships often makes the local account approach the only viable option.
- Understand intranet, extranet, and Internet considerations. Intranet, extranet, and Internet application scenarios each present design challenges. Questions that you should consider include: How will you flow caller identity through multiple application
tiers to back-end resources? Where will you perform authentication? Can you trust authentication at the front end and then use a trusted connection to access back-end resources? In extranet scenarios, you also must consider whether you trust partner accounts.
For more information, see “Perimeter Service Router” at
Auditing and Logging
Auditing and logging are used to monitor and record important activities, such as transactions or user management events, on both the client and the service. Ensure that your logging design allows for the effective auditing of security-critical operations such
as user management events or important business operations such as financial transactions. Be careful not to log sensitive information because the access rights to your log files may be different from access rights to protected operations in your service.
Protect your log files so that an attacker cannot access or tamper with your logs.
Consider the following guidelines:
- Audit and log access across application tiers.
- Back up and analyze log files regularly.
- Consider identity flow.
- Do not log sensitive information.
- Instrument for significant business operations.
- Instrument for unusual activity.
- Instrument for user management events.
- Know your baseline.
- Log key events.
- Protect and audit log files.
- Use log throttling.
Each of these guidelines is briefly described in the following sections.
Audit and Log Access Across Application Tiers
Audit and log access across the tiers of your application for the purpose of non-repudiation. Use a combination of application-level logging and platform auditing features.
Back Up and Analyze Log Files Regularly
There is no point in logging activity if the log files are never analyzed. Log files should be removed from production servers on a regular basis. The frequency of removal depends on your application’s level of activity. Your design should consider the way
that log files will be retrieved and moved to offline servers for analysis. Any additional protocols and ports opened on the Web server for this purpose must be securely locked down.
Consider Identity Flow
Consider how your application will flow caller identity across multiple application tiers. You have two basic choices:
- You can flow the caller’s identity at the operating system level by using Kerberos protocol delegation. This allows you to use operating system–level auditing. The drawback to this approach is that it affects scalability because it means there can be
no effective database connection pooling at the middle tier.
- Alternatively, you can flow the caller’s identity at the application level and use trusted identities to access back-end resources. With this approach, you have to trust the middle tier, which brings a potential repudiation risk. You should generate audit
trails in the middle tier that can be correlated with back-end audit trails.
Do Not Log Sensitive Information
Do not include sensitive information in your log entries. The access rights for your log files may be different than the access rights for sensitive operations and data in your service. Strip out sensitive data such as passwords, credit card numbers, or personally
identifiable information (PII) before logging an error or an event to your log files.
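As an illustration, sensitive fields can be masked before an entry reaches the log sink. This is a minimal sketch: the two regular expressions are illustrative patterns for password fields and card numbers, not a complete redaction policy.

```python
import re

# Illustrative patterns; a real service would tailor these to its own data formats.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
PASSWORD_RE = re.compile(r"(password\s*=\s*)\S+", re.IGNORECASE)

def scrub(entry):
    """Mask likely credit card numbers and password fields before logging."""
    entry = CARD_RE.sub("[CARD REDACTED]", entry)
    entry = PASSWORD_RE.sub(r"\1[REDACTED]", entry)
    return entry
```

Applying `scrub` at the single choke point where entries are written keeps the redaction rules in one place.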
Instrument for Significant Business Operations
Track significant business operations. For example, instrument your application to record access to particularly sensitive methods and business logic.
Instrument for Unusual Activity
Instrument your application and monitor events that might indicate unusual or suspicious activity. This enables you to detect and react to potential problems as early as possible. Unusual activity might be indicated by:
- Replays of old authentication tickets.
- Too many login attempts over a specific period of time.
Instrument for User Management Events
Instrument your application and monitor user management events such as password resets, password changes, account lockout, user registration, and authentication events. Doing this helps you to detect and react to potentially suspicious behavior. It also enables
you to gather operations data; for example, to track who is accessing your application and when user account passwords need to be reset.
Know Your Baseline
Before deploying your application, audit your log files so you know what normal application behavior looks like. Knowing your baseline can help you identify an attack in progress early on and limit damage to your system.
Log Key Events
The types of events that should be logged include successful and failed logon attempts, modification of data, retrieval of data, network communications, and administrative functions such as the enabling or disabling of logging. Logs should include the time
of the event, the location of the event (including the machine name), the identity of the current user, the identity of the process initiating the event, and a detailed description of the event.
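A log record carrying the fields listed above might be assembled as follows. This is a sketch; the field names are illustrative, not a prescribed schema.

```python
import datetime
import os
import socket

def build_log_record(event_type, user, description):
    """Assemble a log entry with time, location, identity, process, and detail."""
    return {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "machine": socket.gethostname(),   # location of the event
        "user": user,                      # identity of the current user
        "process_id": os.getpid(),         # process initiating the event
        "event_type": event_type,          # e.g. "logon_failure", "data_modified"
        "description": description,        # detailed description of the event
    }
```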
Protect and Audit Log Files
Protect and audit log files using Windows access control lists (ACLs), and restrict access to the log files. If you log events to Microsoft SQL Server® or to some custom event sink, use appropriate access controls to limit access to the event data.
For example, grant write access to the account or accounts used by your application, grant full control to administrators, and grant read-only access to operators.
This makes it more difficult for attackers to tamper with log files in order to cover their tracks. Minimize the number of individuals who can manipulate the log files. Authorize access only to highly trusted accounts such as administrators.
Also keep in mind the following additional considerations:
- Log application events on a separate, protected server. This helps to ensure that attackers cannot tamper with logs.
- Assign appropriate permissions to the log files. Logs should be written by a process with write permission only. Logs should be read by users with administrative access.
- Log application events in sufficient detail. Provide sufficient detail to permit reconstruction of system activity.
- Use performance counters for high-volume, per-request events. This helps to minimize the impact on performance.
Use Log Throttling
Use log throttling to limit the number of logs as well as the size of the log entries that a single user can generate. Log throttling can protect your application from a denial of service (DoS) attack that may overwhelm your logging infrastructure and negatively
impact the availability of your service.
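One way to apply this guideline is a sliding-window counter per user, combined with a cap on entry size. The limits below are illustrative assumptions, not recommended values.

```python
import time
from collections import defaultdict, deque

class LogThrottle:
    """Caps entries per user per time window and truncates oversized entries."""

    def __init__(self, max_entries=100, window_seconds=60.0, max_entry_bytes=1024):
        self.max_entries = max_entries
        self.window = window_seconds
        self.max_entry_bytes = max_entry_bytes
        self._recent = defaultdict(deque)  # user -> timestamps of recent entries
        self.sink = []                     # stand-in for the real log store

    def try_log(self, user, message, now=None):
        now = time.monotonic() if now is None else now
        stamps = self._recent[user]
        while stamps and now - stamps[0] > self.window:
            stamps.popleft()               # forget entries outside the window
        if len(stamps) >= self.max_entries:
            return False                   # throttled: blunts a log-flood DoS
        stamps.append(now)
        self.sink.append(message[: self.max_entry_bytes])
        return True
```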
Authentication
Authentication is the mechanism by which your clients establish their identity with your service, using a set of credentials that prove that identity. Protect your users’ credentials when they are sent over the network, as well as when they are stored on the server.
Consider the following guidelines:
- Be able to disable accounts.
- Do not send passwords over the wire in plaintext.
- Do not store passwords in user stores.
- Protect authentication cookies.
- Require strong passwords.
- Support password expiration periods.
- Use account lockout policies for end-user accounts.
Be Able to Disable Accounts
If the system is compromised, being able to deliberately invalidate credentials or disable accounts can prevent additional attacks.
Do Not Send Passwords over the Wire in Plaintext
Plaintext passwords sent over a network are vulnerable to eavesdropping. To address this threat, secure the communication channel; for example, by using Secure Sockets Layer (SSL) to encrypt the traffic.
Do Not Store Passwords in User Stores
If you must verify passwords, it is not necessary to actually store the passwords. Instead, store a one-way hash value and then recompute the hash using the user-supplied passwords. To mitigate the threat of dictionary attacks against the user store, use strong
passwords and incorporate a random salt value with the password.
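The store-a-hash approach can be sketched with a salted key-derivation function. This sketch uses PBKDF2 rather than any particular membership provider; the iteration count is an illustrative assumption.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; tune for your hardware

def hash_password(password, salt=None):
    """Derive a one-way hash from the password and a random per-user salt."""
    if salt is None:
        salt = os.urandom(16)  # random salt defeats precomputed dictionary attacks
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash from the supplied password and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)
```

Only the salt and digest are persisted in the user store; the plaintext password is never written anywhere.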
Protect Authentication Cookies
A stolen authentication cookie is a stolen logon. Protect authentication tickets using encryption and secure communication channels. Also limit the time interval in which an authentication ticket remains valid, to counter the spoofing threat that can result
from replay attacks, where an attacker captures the cookie and uses it to gain illicit access to your site. Reducing the cookie timeout does not prevent replay attacks but does limit the amount of time the attacker has to access the site using the stolen cookie.
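The limited-validity idea can be sketched as a signed, time-stamped ticket. This is an illustration only: it signs the ticket so tampering is detectable and enforces a lifetime; it assumes the channel itself is encrypted (for example, with SSL), and the key and lifetime values are placeholders.

```python
import base64
import binascii
import hashlib
import hmac
import time

SECRET_KEY = b"replace-with-server-side-key"  # placeholder; never hard-code real keys
TICKET_LIFETIME = 20 * 60                     # short validity window narrows replay exposure

def issue_ticket(user, now=None):
    """Sign the user name and issue time so tampering is detectable."""
    now = int(time.time() if now is None else now)
    payload = "{}|{}".format(user, now).encode("utf-8")
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode("ascii") + "." + signature

def validate_ticket(ticket, now=None):
    """Return the user name for a valid, unexpired ticket; otherwise None."""
    now = time.time() if now is None else now
    try:
        encoded, signature = ticket.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(encoded.encode("ascii"))
    except (ValueError, binascii.Error):
        return None
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None                            # forged or altered ticket
    user, _, issued = payload.decode("utf-8").rpartition("|")
    if now - int(issued) > TICKET_LIFETIME:
        return None                            # expired: limits the replay window
    return user
```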
Require Strong Passwords
Do not make it easy for attackers to crack passwords. There are many guidelines available, but a general practice is to require a minimum of eight characters and a mixture of uppercase and lowercase characters, numbers, and special characters. Whether you are
using the platform to enforce these requirements for you, or you are developing your own validation, this step is necessary to counter brute-force attacks where an attacker tries to crack a password through systematic trial and error. Use regular expressions
to help with strong password validation.
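The policy described above (at least eight characters, with uppercase, lowercase, numeric, and special characters) can be expressed as a single regular expression using one lookahead per character class. The expression is a sketch of that specific policy, not a universal rule.

```python
import re

# One lookahead per required character class, then an overall length check.
STRONG_PASSWORD = re.compile(
    r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z0-9]).{8,}$"
)

def is_strong(password):
    """True if the password satisfies the eight-character, four-class policy."""
    return STRONG_PASSWORD.match(password) is not None
```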
Support Password Expiration Periods
Passwords should not be static and should be changed as part of routine password maintenance through password expiration periods. Consider providing this type of facility during application design.
Use Account Lockout Policies for End-user Accounts
Disable end-user accounts or write events to a log after a set number of failed logon attempts. If you are using Windows authentication, such as NTLM or the Kerberos protocol, these policies can be configured and applied automatically by the operating system.
With Forms authentication, these policies are the responsibility of the application and must be incorporated into the application design. Be careful to ensure that account lockout policies cannot be abused in DoS attacks.
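When the application itself must implement the policy, the bookkeeping can be sketched as a failure counter with an automatically expiring lock. The thresholds are illustrative; the automatic expiry is one way to keep the lockout mechanism from becoming a DoS lever.

```python
class AccountLockout:
    """Locks an account after repeated failed logons; the lock expires
    automatically so the policy is harder to abuse for denial of service."""

    def __init__(self, max_failures=5, lockout_seconds=900.0):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self._failures = {}   # account -> consecutive failed attempts
        self._locked_at = {}  # account -> time the lock was applied

    def is_locked(self, account, now):
        locked_at = self._locked_at.get(account)
        if locked_at is None:
            return False
        if now - locked_at >= self.lockout_seconds:
            del self._locked_at[account]      # lock expired
            self._failures.pop(account, None)
            return False
        return True

    def record_failure(self, account, now):
        count = self._failures.get(account, 0) + 1
        self._failures[account] = count
        if count >= self.max_failures:
            self._locked_at[account] = now    # threshold reached: lock the account

    def record_success(self, account):
        self._failures.pop(account, None)     # reset the counter on success
```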
For information on key authentication patterns, see the following resources:
Authorization
Authorization is the mechanism by which you control the operations and resources an authenticated client can access. Where possible, authenticate your users on the same application tier where you authorize your users. Run your application in a least-privileged
account and use impersonation to increase privileges only when necessary and for the shortest time possible. Use ACLs to restrict the system resources that your application and its users can access.
Consider the following guidelines:
- Tie authentication to authorization on the same tier.
- Consider authorization granularity.
- Know your authorization options.
- Enforce separation of privileges.
- Restrict user access to system-level resources.
- Use least-privileged accounts.
- Use multiple gatekeepers.
Tie Authentication to Authorization on the Same Tier
Where possible, authenticate your users on the same application tier where you authorize your users. The further you separate the time of check (authentication) from the time of use (authorization), the larger window of opportunity you give an attacker to subvert
your authorization mechanism.
Consider Authorization Granularity
There are three common authorization models, each with varying degrees of granularity and scalability:
- The most granular approach relies on impersonation. Resource access occurs using the security context of the caller. Windows ACLs on the secured resources (typically files or tables, or both) determine whether the caller is allowed to access the
resource. If your application provides access primarily to user-specific resources, this approach may be valid. It has the added advantage that operating system–level auditing can be performed across the tiers of your application, because the original caller’s
security context flows at the operating system level and is used for resource access. However, the approach suffers from poor application scalability because effective connection pooling for database access is not possible. As a result, this approach is most
frequently found in limited scale intranet-based applications.
- The least granular but most scalable approach uses the application’s process identity for resource access. This model is referred to as the
trusted subsystem or sometimes as the trusted server model. Although this approach supports database connection pooling, it means that the permissions granted to the application’s identity in the database are common, irrespective of the identity
of the original caller. The primary authorization is performed in the application’s logical middle tier using roles, which group together users who share the same privileges in the application. Access to classes and methods is restricted based on the role
membership of the caller. To support the retrieval of per-user data, a common approach is to include an identity column in the database tables and to use query parameters to restrict the retrieved data. For example, you may pass the original caller's identity
to the database at the application (not operating system) level through stored procedure parameters.
- The third option is to use a limited set of identities for resource access based on the role membership of the caller. This is really a hybrid of the two models described earlier. Callers are mapped to roles in the application’s logical middle tier,
and access to classes and methods is restricted based on role membership. Downstream resource access is performed using a restricted set of identities determined by the current caller’s role membership.
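The hybrid model above can be sketched as two lookups: the caller is mapped to a role in the middle tier, and the role selects one restricted downstream identity. All names here (the callers, the roles, and the service accounts) are hypothetical.

```python
# Hypothetical role map illustrating the hybrid model: callers are assigned to
# roles in the middle tier, and each role maps to one restricted downstream identity.
ROLE_OF_CALLER = {"alice": "Manager", "bob": "Employee"}
IDENTITY_FOR_ROLE = {
    "Manager": "svc_finance_rw",   # account with read/write access to the database
    "Employee": "svc_finance_ro",  # account limited to read-only access
}

def resource_identity(caller):
    """Select the downstream access identity from the caller's role membership."""
    role = ROLE_OF_CALLER.get(caller)
    if role is None:
        raise PermissionError("unknown caller: " + caller)
    return IDENTITY_FOR_ROLE[role]
```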
Know Your Authorization Options
Know your authorization options and choose the most appropriate one for your scenario. First decide if you want to use resource-based or role-based authorization. Resource-based authorization uses ACLs on the resource to authorize the original caller. Role-based
authorization allows you to authorize access to service operations or resources based on the group a user is in.
- If you choose to use role-based authorization, you can store your roles in Windows groups or in ASP.NET roles.
- If you are using Active Directory, consider using Windows groups based on ease of maintenance and the fact that you maintain both roles and credentials in the Active Directory store. If you are not using Active Directory, consider using ASP.NET roles and
the ASP.NET role provider.
Restrict User Access to System-level Resources
System-level resources include files, folders, registry keys, Active Directory objects, database objects, event logs, and so on. Use ACLs to restrict which users can access what resources and the types of operations that they can perform. Pay particular attention
to anonymous Internet user accounts; lock these down with ACLs on resources that explicitly deny access to anonymous users.
Use Least-privileged Accounts
You might need to create a custom service account to isolate your application from other applications on the same server, or to be able to audit each application separately.
Use Multiple Gatekeepers
On the server side, you can use IP Security Protocol (IPSec) policies to provide host restrictions to restrict server-to-server communication. For example, an IPSec policy might restrict any host apart from a nominated Web server from connecting to a database
server. Internet Information Services (IIS) provides Web permissions and Internet Protocol/Domain Name System (IP/DNS) restrictions. IIS Web permissions apply to all resources requested over HTTP regardless of the user. The permissions do not provide protection
if an attacker manages to log on to the server. For this, NTFS permissions allow you to specify per-user ACLs. Finally, ASP.NET provides URL authorization and File authorization together with principal permission demands. By combining these gatekeepers, you
can develop an effective authorization strategy.
For more information, see “Trusted Subsystem” at
Configuration Management
Security settings, authentication, authorization, logging, and other parameters can be set in configuration files. Encrypt configuration sections that contain sensitive data such as connection strings to your SQL database. Protect access to your configuration
settings so that an attacker cannot modify security settings for your service.
Consider the following guidelines:
- Consider your key storage location.
- Encrypt sensitive sections of configuration files.
- Use ACLs to protect your configuration files.
- Use secure settings for various operations of Web services.
Consider Your Key Storage Location
If you need to store keys, choose platform features over rolling your own mechanism. The Data Protection API (DPAPI)– and RSA-protected configuration providers used to encrypt sensitive data in configuration files can use either machine stores or user stores
for key storage. You can either store the key in the machine store and create an ACL for your specific application identity, or store the key in a user store. In the latter case, you need to load the user account’s profile to access the key.
Use machine-level key storage when:
- Your application runs on its own dedicated server with no other applications.
- You have multiple applications on the same server that run using the same identity, and you want those applications to be able to share sensitive information and the same encryption key.
Use user-level key storage if you run your application in a shared hosting environment and you want to ensure that your application’s sensitive data is not accessible to other applications on the server. In this scenario, each application should have a separate
identity so that they all have their own individual and private key stores.
Encrypt Sensitive Sections of Configuration Files
Configuration files may contain sensitive information, such as connection strings to your database. Encrypt sensitive information in your configuration files using the DPAPI provider with the machine-key store. You can use the aspnet_regiis command-line tool
to encrypt sections of your configuration file.
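For example, the connectionStrings section of a Web.config file can be encrypted with the DPAPI provider from the command line (the application path /MyApp is a placeholder for your own virtual directory):

```
aspnet_regiis -pe "connectionStrings" -app "/MyApp" -prov "DataProtectionConfigurationProvider"
```

ASP.NET decrypts the section transparently at run time, so application code does not change.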
Use ACLs to Protect Your Configuration Files
Use ACLs to lock your configuration files down and restrict inappropriate access. Modifications to your configuration settings, especially binding options, can have a major impact on the security of your service.
Use Secure Settings for Various Operations of Web Services
Set your configuration options to take advantage of features such as message and transport security, which protect the communication channel between your client and your service.
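As an illustration, a WCF wsHttpBinding can enable message security in configuration; the binding name is a placeholder, and the client credential type shown is only one of several options:

```xml
<wsHttpBinding>
  <binding name="secureBinding">
    <!-- Message security encrypts and signs each individual message -->
    <security mode="Message">
      <message clientCredentialType="Windows" />
    </security>
  </binding>
</wsHttpBinding>
```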
Exception Management
Exception management is the means by which you expose and consume exception information within your service and send it back to your clients. Be careful not to reveal internal application details to your clients, as this information could assist an attacker
trying to exploit your service. Catch and handle exceptions so that error conditions do not lead to a service crash and a DoS condition for your clients. Fail to a secure state so that an error condition does not result in your application running at higher
privilege or accessing resources insecurely.
Consider the following guidelines:
- Catch exceptions.
- Do not log private data such as passwords.
- Do not reveal sensitive system or application information.
- Log detailed error messages.
Catch Exceptions
Use structured exception handling and catch exception conditions with try/catch blocks. Doing so avoids leaving your application in an inconsistent state that may lead to information disclosure. It also helps protect your application from DoS attacks. Decide
how to propagate exceptions internally in your application and give special consideration to what occurs at the application boundary. Catch and wrap exceptions only where it adds value or will provide additional information relevant to the exception.
Do Not Log Private Data Such as Passwords
Exception handlers often result in an error log entry. Be careful not to log sensitive information such as passwords, credit card numbers, or personally identifiable information (PII). This information may make it easier to decipher error logs; however,
sensitive data is not secure in log files and could be accessed by users who would not normally have access to this information.
Do Not Reveal Sensitive System or Application Information
In the event of a failure, do not expose information that could lead to information disclosure. For example, do not expose stack trace details that include function names and line numbers in the case of debug builds (which should not be used on production servers).
Instead, return generic error messages to the client.
Log Detailed Error Messages
Send detailed error messages to the error log. Send minimal information to the consumer of your service or application, such as a generic error message and custom error log ID that subsequently can be mapped to a detailed message in the event logs. Make sure
that you do not log passwords or other sensitive data.
For more information on how to handle exceptions, see “Exception Shielding” at
Message Protection
Message protection covers the mechanisms used to protect sensitive data in transit over the network from unauthorized access or modification. Use message or transport security to protect your messages in transit. Do not try to create your own cryptographic
routines; use the platform-provided cryptography instead.
Consider the following guidelines:
- Use message security or transport security to encrypt and sign your messages.
- Use platform-provided cryptography.
- Use platform features for key management.
- Periodically change your keys.
Use Message Security or Transport Security to Encrypt and Sign Your Messages
Use message security or transport security to encrypt your messages on the network. Message security encrypts each individual message to protect sensitive data. Transport security secures the end-to-end network connection to protect the network traffic. Message
encryption protects the contents of your message from being stolen and read. Message signing protects the integrity of your message and guarantees the authenticity of the sender.
Use Platform-Provided Cryptography
Cryptography is notoriously difficult to develop. The Windows crypto APIs have been proven to be effective. These APIs are implementations of algorithms derived from years of academic research and study. Some developers believe that a less well-known algorithm
can provide more security, but this is not true. Cryptographic algorithms are mathematically proven; therefore, the more scrutiny they receive, the better. An obscure algorithm will not protect a flawed cryptographic implementation from a determined attacker.
- For hashing, use SHA1. For integrity checking, use HMACSHA1 or a digital signature mechanism.
- Consider using the XMLEncryption mechanisms when you need to encrypt different parts of a document under different keys, or if you only want to encrypt small sections of a document.
- Use X.509 and S/MIME encryption if you are using an internal or external public key infrastructure (PKI) based on digital certificates.
Use Platform Features for Key Management
Use platform features where possible to avoid managing keys yourself. For example, by using DPAPI, the encryption key is derived from an account’s password, so Windows handles this for you.
Periodically Change Your Keys
You should change your encryption keys from time to time because a static secret is more likely to be discovered over time. Did you write it down somewhere? Did the administrator with access to the secrets change positions in your company or leave the company?
Are you using the same session key to encrypt communication for a long time? Also, do not overuse keys.
For more information, see the following resources:
Message Validation
Message validation is used to protect your service from malformed messages and message parameters. Message schemas can be used to validate incoming messages, and custom validators can be used to validate parameter data before your service consumes it. Do not
trust input from any source that the client can influence, such as cookies, headers, IP address, or the content of messages sent to your service. Also do not trust input from a database, the file system, or anything else outside the trust boundary of your service.
Use message schemas and data validators to check for format, range, length, and type. Do not rely on client-side validation; make all security decisions based on server-side validation.
Consider the following guidelines:
- Do not trust input.
- Verify the message payload against a schema.
- Verify the message size, content, and character sets.
- Filter, scrub, and reject input and output before additional processing.
Do Not Trust Input
An attacker passing malicious input can attempt SQL injection, cross-site scripting, and other injection attacks that aim to exploit your application’s vulnerabilities. Check for known good data and constrain input by validating it for type, length, format, and range.
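A minimal sketch of allow-list validation, shown in Python. The parameter name and pattern are hypothetical; the point is to accept only known-good values and reject everything else before it reaches downstream components.

```python
import re

# Hypothetical allow-list for an account-name parameter:
# letters, digits, and underscores, 1-32 characters.
ACCOUNT_NAME = re.compile(r"^[A-Za-z0-9_]{1,32}$")

def validate_account_name(value: str) -> str:
    """Reject anything outside the known-good pattern."""
    if not ACCOUNT_NAME.fullmatch(value):
        raise ValueError("invalid account name")
    return value

validate_account_name("alice_01")  # accepted
try:
    validate_account_name("alice'; DROP TABLE users;--")
except ValueError:
    pass  # rejected before it can reach the database
```

Constraining to known good data is safer than trying to enumerate and block known bad patterns, which attackers can often encode around.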
Verify the Message Payload Against a Schema
If you need to validate parameters, message contracts, or data contracts passed to operations, use schemas to validate the incoming message. Schemas provide a wide range of input validation without the need for custom code or validation routines.
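To illustrate the idea, the sketch below hand-checks the expected structure of a hypothetical TransferRequest message. Python's standard library has no XSD validator, so structural checks are written out explicitly here; in a real service you would validate against your actual message schema.

```python
import xml.etree.ElementTree as ET

def validate_message(xml_text: str) -> dict:
    """Check that the payload matches the expected shape before using it."""
    root = ET.fromstring(xml_text)
    if root.tag != "TransferRequest":
        raise ValueError("unexpected root element")
    account = root.findtext("Account")
    amount = root.findtext("Amount")
    if account is None or amount is None:
        raise ValueError("missing required element")
    # Type checking via conversion; a schema would express this declaratively.
    return {"account": account, "amount": float(amount)}

msg = validate_message(
    "<TransferRequest><Account>42</Account><Amount>10.5</Amount></TransferRequest>"
)
```

A declared schema gives you this same checking for free across all operations, which is why schema validation is preferred over per-operation custom code.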
Verify the Message Size, Content, and Character Sets
Validate incoming messages to ensure that they match your expectations regarding size, content, and character encoding. If a message is much larger than expected or contains encoding other than what your service expects, you may be under attack. Reject the
message and log the occurrence so that auditing can determine if an attack is underway.
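These checks can happen before any parsing takes place, as in this Python sketch (the size limit is a hypothetical value for illustration):

```python
MAX_MESSAGE_BYTES = 64 * 1024  # hypothetical limit for this service

def check_envelope(raw: bytes) -> str:
    """Reject oversized or mis-encoded messages before parsing them."""
    if len(raw) > MAX_MESSAGE_BYTES:
        raise ValueError("message exceeds expected size")
    try:
        return raw.decode("utf-8", errors="strict")
    except UnicodeDecodeError:
        raise ValueError("message is not valid UTF-8")
```

On rejection, log the occurrence (size, source, timestamp) so that auditing can correlate repeated failures into evidence of an attack.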
Filter, Scrub, and Reject Input and Output Before Additional Processing
Filter and reject input before allowing the data to be processed by downstream components. Because malicious input may target the routines that process your input, it is important to detect and reject malformed input early before additional processing occurs.
Scrub your output before sending to the client as it may include potentially dangerous input from sources such as the file system or your database that is outside of your service trust boundary.
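Output scrubbing can be as simple as encoding data for the context in which it will be rendered. A minimal Python sketch, assuming the output is destined for an HTML context:

```python
import html

def scrub_for_client(value: str) -> str:
    """Encode data from outside the trust boundary before returning it."""
    return html.escape(value)

# A display name read from the database may itself carry an attack:
stored = "<script>alert('xss')</script>"
safe = scrub_for_client(stored)
# The markup is now inert text rather than executable script.
```

The key point is that data from the database or file system is still untrusted input from the service's perspective and must be encoded on the way out, not just validated on the way in.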
For more information, see “Message Validator” at
Sensitive Data
Sensitive data refers to confidential information that your service processes, transmits, or stores. Protect sensitive data on the network, in configuration files, in local memory or file storage, and in databases and log files. Ensure that you are aware of
all sensitive information your service transmits or processes. Sensitive data includes user identity and credentials as well as any personally identifiable information (PII), such as a social security number.
Consider the following guidelines:
- Do not store database connections, passwords, or keys in plaintext.
- Do not store secrets if you can avoid it.
- Do not store secrets in code.
- Encrypt sensitive data in configuration files.
- Encrypt sensitive data over the network.
- Retrieve sensitive data on demand.
Do Not Store Database Connections, Passwords, or Keys in Plaintext
Avoid storing secrets such as database connection strings, passwords, and keys in plaintext. Use encryption and store encrypted strings.
Do Not Store Secrets if You Can Avoid It
Storing secrets in software in a completely secure fashion is not possible; an administrator with physical access to the server can eventually access the data. Often you do not need to store the secret at all. For example, when all you need to do is verify whether a user knows
a secret, you can store a hash value that represents the secret and compute the hash of the user-supplied value to compare the two.
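A minimal sketch of this pattern in Python, using a salted PBKDF2 hash (the iteration count and salt size are illustrative choices, not prescriptions):

```python
import hashlib
import hmac
import os

def store_secret(secret: str) -> tuple:
    """Store a salted hash of the secret, never the secret itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return salt, digest

def knows_secret(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Hash the supplied value the same way and compare in constant time."""
    attempt = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = store_secret("s3cret")
assert knows_secret("s3cret", salt, digest)
assert not knows_secret("guess", salt, digest)
```

Because only the salt and digest are stored, an administrator or attacker who reads the store cannot recover the original secret from it directly.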
Do Not Store Secrets in Code
Do not hard-code secrets in code. Even if the source code is not exposed on the Web server, it is possible to extract string constants from compiled executable files. A configuration vulnerability may allow an attacker to retrieve the executable.
Encrypt Sensitive Data in Configuration Files
Configuration files may contain sensitive information, such as connection strings to your database. Encrypt sensitive information in your configuration files by using the DPAPI provider with the machine-key store. You can use the aspnet_regiis command-line
tool to encrypt sections of your configuration file.
Encrypt Sensitive Data over the Network
Consider where items of sensitive data, such as credentials and application-specific data, are transmitted over a network link. If you need to send sensitive data between the Web server and browser, consider using SSL. If you need to protect server-to-server
communication, such as between your Web server and database, consider IPSec or SSL.
Retrieve Sensitive Data on Demand
The preferred approach is to retrieve sensitive data on demand when it is needed instead of persisting or caching it in memory. For example, retrieve the encrypted secret when it is needed, decrypt it, use it, and then clear the memory (variable) used to hold
the plaintext secret.
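The pattern can be sketched as follows. The retrieval and connection functions here are hypothetical stand-ins, and note that a managed runtime (Python or .NET) cannot guarantee that no copies of the plaintext remain in memory; the sketch only illustrates the intent of clearing the buffer you control.

```python
def connect(password: bytearray) -> bool:
    """Hypothetical stand-in for opening a database connection."""
    return len(password) > 0

def get_secret() -> bytearray:
    """Hypothetical on-demand retrieval of the decrypted secret.
    A mutable buffer lets us overwrite it when we are done."""
    return bytearray(b"db-password")

def use_secret() -> bool:
    secret = get_secret()
    try:
        return connect(secret)        # use the secret only while needed
    finally:
        for i in range(len(secret)):  # then overwrite the plaintext copy
            secret[i] = 0

assert use_secret()
```

Retrieving and clearing on each use narrows the window during which a memory dump or page file could expose the plaintext.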
Session Management
Sessions are the means by which an application maintains stateful communication with a client over time. Protect your session tokens or identifiers so that an attacker cannot gain access and steal a user’s session. Reduce the timeouts on your sessions to lower
the chances of an attacker being able to steal a session after a user has finished using your application.
Consider the following guidelines:
- Authenticate and authorize access to the session store.
- Avoid storing sensitive data in session stores.
- Reduce session timeouts.
- Secure the channel to the session store.
Authenticate and Authorize Access to the Session Store
Authenticate and authorize access to your session store. The session store contains identifiers that maintain session state for your users. This information can be used by an attacker to hijack user sessions and take actions on their behalf. Use authentication
and authorization to restrict direct access to this store so that only your service can access the information it contains.
Avoid Storing Sensitive Data in Session Stores
Avoid storing sensitive information, such as user credentials, in your session store. The permissions required to access your session store may be different than the permissions necessary to access sensitive data or operations in your service. The session store
should contain the bare minimum of information to track a session ID and maintain the session state for your users.
Reduce Session Timeouts
Reduce the lifetime of sessions to mitigate the risk of session hijacking and replay attacks. The shorter the session, the less time an attacker has to capture a session cookie and use it to access your application.
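An idle-timeout check can be sketched as follows; the 15-minute TTL and in-memory store are hypothetical choices for illustration, not a recommendation for production state storage.

```python
import time

SESSION_TTL = 15 * 60  # hypothetical 15-minute idle timeout, in seconds

sessions = {}  # session ID -> last-activity timestamp

def touch(session_id: str) -> None:
    """Record activity for the session, resetting its idle clock."""
    sessions[session_id] = time.monotonic()

def is_valid(session_id: str) -> bool:
    """A session expires SESSION_TTL seconds after its last activity."""
    last = sessions.get(session_id)
    return last is not None and time.monotonic() - last < SESSION_TTL

touch("abc123")
assert is_valid("abc123")
sessions["abc123"] -= SESSION_TTL + 1  # simulate a long-idle session
assert not is_valid("abc123")
```

A stolen session identifier is only useful to an attacker while the session is valid, so a shorter timeout directly shrinks the attack window.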
Secure the Channel to the Session Store
Consider how session state is to be stored. For optimum performance, you can store session state in the Web application’s process address space. However, this approach has limited scalability and implications in Web farm scenarios, where requests from the same
user cannot be guaranteed to be handled by the same server. In this scenario, an out-of-process state store on a dedicated state server or a persistent state store in a shared database is required. You should secure the network link from the Web application
to state store by using IPSec or SSL to mitigate the risk of eavesdropping.