Bug Disclosure: Pervasive Open Redirect in GitLab

While performing a code review of the GitLab open source codebase, I found a pervasive open redirect vulnerability affecting project pages.

The Project Application controller defines a before_action filter named redirect_git_extension. This filter attempts to detect and remove the git extension that may appear in a project request’s URI. In order to do this, it calls the Ruby on Rails redirect_to method with the original request’s params object.

Because the requester controls the params object, the redirect_to method can be called with arbitrary options. For a list of accepted options in the latest version of Ruby on Rails, please see:

http://api.rubyonrails.org/classes/ActionController/Redirecting.html

Impact

An attacker can supply options such as host and protocol to change the target of the redirect, thereby redirecting a user to an arbitrary domain.

There are many controllers that inherit from the Project Application controller. All actions of these controllers are potentially vulnerable because the affected before_action filter runs for each of them. Some affected controller actions also do not require authentication, such as the Project controller’s index action.

An attacker can exploit an open redirect vulnerability in a phishing attack to trick users into trusting a malicious third-party webpage. This is because users who click on a link may not notice a redirect taking place, especially if the two domains look similar. As a result, a victim may unknowingly enter their login credentials on an attacker-controlled webpage.

Reproduction

The following reproduction demonstrates an unauthenticated user hitting the Project controller’s index action and getting redirected to an attacker-supplied domain (in this case example.com):
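
The exact request from the report isn’t included here, so the following is an illustrative sketch using Python’s requests library. The instance URL, project path, and query parameters are hypothetical; the point is that attacker-controlled params such as host and protocol flow into redirect_to.

    import requests

    # Hypothetical GitLab instance and project path; the query string carries
    # redirect_to options that the vulnerable filter passes through.
    url = ("https://gitlab.example.org/some-group/some-project.git"
           "?host=example.com&protocol=https")

    resp = requests.get(url, allow_redirects=False)
    print(resp.status_code)              # a 3xx response if the redirect fires
    print(resp.headers.get("Location"))  # expected to point at example.com

If the instance is vulnerable, the Location header points at a host of the attacker’s choosing rather than the GitLab server itself.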

Recommended Fix

Rather than invoking the url_for method with a user-controllable params object, it is recommended to redirect to a modified version of the requested URI string instead. By stripping the requested URI string of its extension, a version without the git extension can then be redirected to securely.
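
As a minimal sketch of that idea (shown in Python rather than Rails, purely to illustrate the string handling), the redirect target is derived from the request path alone and never from user-supplied parameters:

    from urllib.parse import urlsplit

    def path_without_git_extension(request_uri: str) -> str:
        # Work only on the path component of the requested URI; never let
        # user-supplied parameters choose the scheme or host.
        parts = urlsplit(request_uri)
        path = parts.path
        if path.endswith(".git"):
            path = path[:-len(".git")]
        # A relative path like this cannot send the user off-site.
        return path

    # Example: "/group/project.git" -> "/group/project"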

Timeline

  • 09/10/2017 – Issue submitted to GitLab via HackerOne.
  • 09/11/2017 – GitLab communicates that the issue has been reproduced and triaged.
  • 10/11/2017 – GitLab communicates that a patch is ready and will be included in an upcoming security release.
  • 10/17/2017 – GitLab publishes a security release that patches this issue. Users are advised to update.

Bug Disclosure: Remote Code Execution In BlackBerry Workspaces Server

While performing a network penetration test for one of our clients at GDS, I came across a BlackBerry Workspaces (formerly WatchDox) Server. These servers can be deployed on customer networks and function as stand-alone appliances. According to BlackBerry:

BlackBerry(R) Workspaces lets you collaborate securely, with all the features you expect from an advanced enterprise file share and mobility solution. Create collaborative workspaces, share files inside and outside your organization, access your files from any device and ensure that the latest version of your file is always synced and available across all your devices.

What makes Workspaces different from competitive solutions is its file-level security. It offers 256-bit file encryption and access controls to ensure that only authorized users can access your files, even after they leave your network. Workspaces also embeds Digital Rights Management (DRM) protection into files, which means that you can control whether users are able to save, edit, copy or print the files.

I found that by issuing an HTTP request for a file inside of a particular directory, I could get a specific component of the product to return its source code.

By analyzing this source code, I was able to find a directory traversal vulnerability in unauthenticated file upload functionality. Exploiting this, I was then able to upload a web shell into another component’s webroot and obtain remote code execution. Because these kinds of servers house highly sensitive data, I’m sure you can imagine the sort of access this granted me within the client’s organization.
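
Since the product-specific paths and component names are deliberately withheld here, the following is only a generic sketch of the bug class, with all names hypothetical: an upload handler that joins a user-supplied filename onto its upload directory without validating it can be made to write outside that directory.

    import os

    UPLOAD_DIR = "/opt/app/uploads"  # hypothetical upload directory

    def naive_destination(filename: str) -> str:
        # Vulnerable pattern: the user-supplied filename is joined without
        # rejecting path separators or ".." segments.
        return os.path.normpath(os.path.join(UPLOAD_DIR, filename))

    # A traversal payload escapes the upload directory and lands in another
    # component's webroot:
    print(naive_destination("../other-component/webroot/shell.jsp"))
    # -> /opt/app/other-component/webroot/shell.jsp

Once an attacker can drop an executable file into a directory that the application server will serve and execute, remote code execution follows.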

For more information about how I was able to exploit these vulnerabilities, check out my blog post on the GDS blog here.

Introducing the Solidity Function Profiler

Static analyzers are good at detecting certain types of security vulnerabilities. However, one place that static analysis often falls short is in the detection of authorization bugs. This is because authorization tends to be a “business logic” problem. How would an analyzer know what functionality should be off-limits to normal users? One can infer based on semantics (looking for words like “admin”), but such clear-cut cases are rare.

A couple of days ago I wrote about Parity’s multi-sig contract vulnerability. Because there was nothing inherently wrong with the vulnerable functions, aside from the lack of authorization checks, it is unlikely that a static analyzer would have flagged these issues.

If I had to guess at the culprit behind this vulnerability getting missed, it would probably be a lack of effective manual code review. Manual code review is a tedious, time-consuming task, but it is often the only way to find certain types of bugs. In this particular case, a human looking at a list of the contract’s functions would hopefully have noticed several suspicious-looking public functions.

As far as I know, there are no public tools for Solidity to profile a contract’s functions. That is why today I would like to release a tool called the Solidity Function Profiler.

The tool uses ConsenSys’ solidity-parser library to generate an AST of the contract being analyzed. It then “walks” the AST and finds function declarations, taking note of each function’s signature, visibility, return values, and modifiers. Finally, it returns a human-consumable report. Being able to quickly gather this kind of information about a contract is very useful in understanding how it can be interacted with. My hope is that it will help prevent future vulnerabilities like the one exploited in the multi-sig contract attack.
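
The tool itself is written in JavaScript against ConsenSys’ parser, so the snippet below is only a language-agnostic illustration of the “walk the AST and collect function declarations” step; the node shape (a dict with type, name, visibility, modifiers, and returns keys) is made up and does not reflect the parser’s actual output.

    def collect_functions(node, found=None):
        # Recursively walk a nested dict/list AST and record every function
        # declaration's name, visibility, modifiers, and return values.
        # The node/field names used here are hypothetical.
        if found is None:
            found = []
        if isinstance(node, dict):
            if node.get("type") == "FunctionDeclaration":
                found.append({
                    "name": node.get("name"),
                    "visibility": node.get("visibility", "public"),  # Solidity's default
                    "modifiers": node.get("modifiers", []),
                    "returns": node.get("returns", []),
                })
            for value in node.values():
                collect_functions(value, found)
        elif isinstance(node, list):
            for item in node:
                collect_functions(item, found)
        return found

A report built from this kind of listing makes unexpectedly public functions stand out at a glance.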

You can find the tool here.

Parity Multi-Sig Contract Vulnerability

So this just happened. It’s late, but before heading to bed I wanted to quickly write up a technical analysis of this one because it’s quite short.

One of the quickest ways to understand a vulnerability is to look at its patch if one is available. Let’s do just that. There are actually a couple of now closed pull requests related to the fix, but the very first one tells us the story behind this vulnerability. The diff can be found in Parity’s GitHub repository here.

For readers who aren’t familiar with the Solidity language, the added keywords at the end of the first two differing functions are visibility specifiers. Visibility specifiers dictate who is allowed to call specific functions, just as they do in other programming languages. Sometimes functions simply aren’t fit for public use, either because of security reasons or API design.

What does the internal visibility specifier added in the pull request do? Consulting the Solidity docs, we find:

internal:
Those functions and state variables can only be accessed internally (i.e. from within the current contract or contracts deriving from it), without using this.

The crux of this vulnerability is that several privileged contract functions were left public.

Update: Another detail is that calls to the main contract were delegated to the vulnerable contract, which acted as a library, making this issue a little bit harder to see. I would argue not by much though, especially considering that the main contract simply takes incoming calls and delegatecalls to the vulnerable library contract. The design is hard to miss.

The result was that anybody who knew the address of a vulnerable contract could call these functions and change the configuration of these contracts, including the list of contract owner addresses.

You may be wondering how these privileged functions were made public in the first place. The answer is actually in a lack of code. You see, unless otherwise specified, the visibility of a Solidity function is public.

When I started learning Solidity and came across this detail, I was surprised. Contract developers should be explicit about which functions are allowed to be called externally. This is akin to writing an application in which every function is exposed to the Internet unless you explicitly mark it private.

Needless to say, I think that this is a bad convention for a smart contract programming language.

Analyzing the ERC20 Short Address Attack

Back in April of 2017, the Golem Project published a blog post about the discovery of a security bug affecting some exchanges such as Poloniex. According to the post, when certain exchanges processed transactions of ERC20 tokens, input validation was not being performed on account address length. The result was malformed input data being provided to the contract’s transfer function, and a subsequent underflow condition that manipulated the amount being sent. The impact was that an attacker could potentially rob an exchange account of tokens.
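
To make the malformed-input mechanism concrete, here is a small, self-contained sketch; it is not any exchange’s actual code, and the only non-hypothetical value in it is 0xa9059cbb, the standard selector for transfer(address,uint256). It shows how an address submitted one byte short shifts the decoded arguments and changes the amount being sent.

    SELECTOR = bytes.fromhex("a9059cbb")  # transfer(address,uint256)

    def naive_encode_transfer(addr: bytes, amount: int) -> bytes:
        # Flawed encoder: it never checks that addr is exactly 20 bytes long.
        return SELECTOR + b"\x00" * 12 + addr + amount.to_bytes(32, "big")

    def read_args_like_the_evm(data: bytes):
        # The EVM zero-pads calldata reads that run past the end of the payload.
        padded = data.ljust(4 + 64, b"\x00")
        to = padded[4:36][-20:]                        # last 20 bytes of word one
        amount = int.from_bytes(padded[36:68], "big")  # word two
        return to, amount

    full_addr = bytes.fromhex("11" * 19 + "00")  # illustrative address ending in 0x00
    short_addr = full_addr[:-1]                  # submitted with the last byte dropped

    to, amount = read_args_like_the_evm(naive_encode_transfer(short_addr, 1000))
    print(to == full_addr)  # True  -- the shift restores the trailing zero byte
    print(amount)           # 256000 -- the intended 1000, multiplied by 256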

The attack explained by the Golem Project exemplifies a rather unusual case, in which an exchange acts as both a client and a server. That is, the exchange is a server for users to buy tokens as well as a client to the Ethereum network. This differs from typical contract interaction, in which a client uses the Ethereum network directly and any transaction error would likely be the sole fault of the client and not a third party. Luckily for the Golem Project, the vulnerability is not known to have ever been exploited. It has since been dubbed the “ERC20 short address attack.” (more…)

Running Your Own Private Ethereum Network

If you’re looking to get your feet wet in Ethereum or test out a new contract that you’re developing, you may choose to run your own private network rather than use one of Ethereum’s public testnets. By running your own private network, you can maintain total control over the network and create specific test conditions that you may find useful. You also don’t risk having others discover your new contract before you’re ready to announce it to the world.

Setting up and running your own private network is relatively easy. I present two popular options, each with their own pros and cons:

  • Geth is a popular fully fledged client and is able to do this out of the box. Setup is required, but it’s fairly straightforward.
  • TestRPC simulates an in-memory blockchain and provides an HTTP RPC server. It is extremely fast and easy to set up and tear down. However, as of today TestRPC does not implement every Ethereum API. These limitations are apparent, for example, when trying to use Mist to send personal transactions (see bug report here). A quick way to sanity-check either node from Python is sketched below.
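
Whichever option you pick, both expose an Ethereum JSON-RPC endpoint over HTTP (Geth once you enable its RPC interface, TestRPC by default). Assuming the default address of http://127.0.0.1:8545, a quick sanity check from Python looks like this; eth_blockNumber is a standard JSON-RPC method:

    import requests

    payload = {"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}
    resp = requests.post("http://127.0.0.1:8545", json=payload)
    print(resp.json())  # e.g. {"jsonrpc": "2.0", "id": 1, "result": "0x0"} on a fresh chain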

(more…)

Google Account Security and Number Portability

By now, you may have read this story about someone having $8,000 worth of Bitcoin stolen due to a social engineering attack on their Verizon account. This was an unfortunate event and an urgent reminder that SMS-based 2FA isn’t secure. When you allow a second factor of authentication to occur over SMS, the proof isn’t that you have your phone. Rather, it’s that you are able to receive SMS messages sent to a certain number. The problem with this as a means of authentication can be summed up in two words: number portability. If an attacker can social engineer your mobile provider, they can port your number over to their own account and your 2FA provider would never know the difference.

This got me thinking. How secure is my Google account, even when locked down with 2FA via the Google Authenticator application? Would I be able to withstand an attack similar to the one that Cody Brown suffered?

As it turned out, I wouldn’t. A serious security concern appeared when I went through the account recovery flow for my Google account. The following events illustrate this:

  1. Start the login process on accounts.google.com by entering my username. Click “Forgot password?”
  2. Be asked to “enter the last password you remember.” Click “Try a different question.”
  3. Be asked to “enter a verification code”. Click “Try a different question.”
  4. Be asked to “get a verification code by text message at: (***) ***-**-XX.” Since my cell phone number appears on my business cards and is public information as far as I’m concerned, this would hardly deter an attacker. By taking advantage of number portability, an attacker could steal my number.
  5. Be asked to “confirm the phone number you provided in your security settings: (***) ***-**XX.” Since I just received a text sent to this number, I obviously know this.
  6. Answer a security question of “What is my father’s middle name?” Skipping this forced me to specify the month and year my account was created. While the first security question is terrible, the second option isn’t all that much better as there are a very limited number of possible answers.
  7. Change my password.
  8. Login to my account.

That’s right. Despite using the Google Authenticator application, I was able to effectively skip it and instead opt for receiving a text and answering a lame security question.

Now to be fair, Google discontinued security questions a while ago. However, they stick around in your account until you delete them. And that’s just one symptom of the problem here: Google’s account recovery flow falls back to other forms of verification that you may not even be aware of.

I get why Google designed the account recovery workflow to be this way. For the average user, getting access restored to their account may be more important than locking out adversaries. But for those of us who beg to differ, this can have disastrous consequences.

I urge you to review your 2-Step Verification and remove “Voice or text message” as an alternative second step, as well as any legacy credentials such as security questions. Only trust cryptographically secure 2FA. To prevent accidental lockout, store your 2FA recovery codes somewhere safe.

Your future self will thank you.

OSCP Certified

On December 1st, I took the Offensive Security Certified Professional (OSCP) exam and successfully earned my certification. For those unfamiliar with OSCP, it is a hands-on training course and certification offered by Offensive Security. The content it focuses on is immense; everything from SQL injection to writing your own remote buffer overflow exploits is covered by the course e-book and videos. There is also lengthy coverage of how to properly enumerate hosts and take inventory of an entire network.

(more…)