
I'm testing a web application based on SAP for a customer. One of the checks we normally do is to analyse the cookie holding the session token to make sure that it is sufficiently random and that the next valid token can't be predicted. We do this using Burp Sequencer. On this occasion I noticed that the cookie appeared to begin with a large amount of static data. I Base64-decoded it and found that the first 130 characters (in text) contain the user name, customer code and date/time. I believe this is a known 'feature' of SAP.
What I am struggling to explain in my report is how the results of the decoder match up to the Sequencer output. For example, in clear text the static data should run from position zero to character 130, but this does not correspond to the positions shown in the character-level analysis in Sequencer, nor does the total number of characters shown correspond to the actual character count in the cookie. I can see the static portion in Sequencer, and even the entropy spike where the time portion changes; I just can't correlate the character positions. In addition, when I look at the bit-level analysis, the values seem to be reversed, so that instead of seeing the static value at the start of the sequence, I see it at the end.
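One plausible reason the positions don't line up is simple Base64 arithmetic: every 4 encoded characters carry only 3 decoded bytes, so offsets in the decoded text and offsets in the raw cookie are on different scales. The cookie contents below are invented purely for illustration, but the length arithmetic is standard Base64:

```python
import base64

# Hypothetical cookie plaintext -- the field names and values here
# are made up, not taken from the actual SAP application.
plaintext = "USER=JSMITH;CUST=ACME01;TS=20130404T082200".ljust(130, "X")
token = base64.b64encode(plaintext.encode()).decode()

# 4 Base64 characters encode 3 bytes, so a static prefix of 130
# decoded characters spans roughly 130 * 4 / 3 of the encoded cookie.
decoded_len = 130
encoded_span = (decoded_len * 4 + 2) // 3
print(encoded_span)   # 174 encoded characters cover the static prefix
print(len(token))     # 176: total encoded length of 130 bytes (with padding)
```

So a boundary at decoded character 130 falls around encoded character 174, which would explain why positions seen in the decoder never match the positions reported against the raw token one-to-one.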

I hope someone can explain to me what I am seeing and many thanks for any help.

Andrei Botalov
Marion McCune
  • In case it's of interest there's some more discussion on this on the Burp Forum here http://forum.portswigger.net/index.cgi?board=how&action=display&thread=476 – Rory McCune Apr 04 '13 at 08:22

1 Answer


The Burp Sequencer documentation explains the types of statistical tests it performs to assess randomness. Below is its initial high-level description; the page goes on to describe the characteristics of the individual tests that are run, which include character-level and bit-level analysis of the token data.

Burp Sequencer employs standard statistical tests for randomness. These are based on the principle of testing a hypothesis against a sample of evidence, and calculating the probability of the observed data occurring, assuming that the hypothesis is true:

  • The hypothesis to be tested is: that the tokens are randomly generated.
  • Each test observes specific properties of the sample that are likely to have certain characteristics if the tokens are randomly generated.
  • The probability of the observed characteristics occurring is calculated, working on the assumption that the hypothesis is true.
  • If this probability falls below a certain level (the "significance level") then the hypothesis is rejected and the tokens are deemed to be non-random.
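As an illustration of this hypothesis-testing scheme (not Burp's actual implementation, which isn't shown here), the classic frequency ("monobit") test from NIST SP 800-22 follows exactly this pattern: compute a statistic from the sample, derive the probability of seeing it under the randomness hypothesis, and reject if that p-value falls below the significance level. The sample data below is invented:

```python
import math

def monobit_pvalue(bits):
    """Frequency (monobit) test: under the hypothesis that the bits
    are random, the normalized sum of +1/-1 values approximates a
    standard normal, and the p-value is the probability of a
    deviation at least this large occurring by chance."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A balanced bit sequence yields a high p-value (hypothesis retained)...
balanced = [0, 1] * 64
print(monobit_pvalue(balanced))             # 1.0

# ...while a heavily biased, static-looking sequence yields a tiny one,
# falling below a typical significance level such as 0.01.
biased = [1] * 100 + [0] * 28
print(monobit_pvalue(biased) < 0.01)        # True: tokens deemed non-random
```

This is why a largely static cookie prefix drags down Sequencer's randomness estimate: the static characters fail tests like this at almost any significance level.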

Hope this helps some.

dudebrobro