Tuesday, June 17, 2014

Possible Unknown Field Identified in Prefetch V23

I started looking for a good project that I could use as an example for my series on what to incorporate into a good DFIR tool, showing code examples in both Perl and Python along the way. I settled on a prefetch parser that can send its output to a MySQL database so results from multiple systems can be centralized. I will get into why that is useful, and more details about the tool, later. Some of Harlan Carvey's older posts might give you an idea in the meantime.
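
To give a rough idea of the centralization piece, here is a minimal sketch of pushing one parsed prefetch record into MySQL with Connector/Python. The database, table, and column names are placeholders I made up for illustration, not the tool's actual schema.

    import mysql.connector  # MySQL Connector/Python

    # Hypothetical record produced by the parser (illustrative field names only).
    record = {
        "hostname": "WORKSTATION01",
        "executable": "CMD.EXE",
        "run_count": 42,
        "last_run": "2009-03-16 03:04:05",
    }

    conn = mysql.connector.connect(
        host="localhost", user="dfir", password="secret", database="prefetch_db"
    )
    cur = conn.cursor()

    # Assumes a table like:
    #   CREATE TABLE prefetch (hostname VARCHAR(64), executable VARCHAR(260),
    #                          run_count INT, last_run DATETIME);
    cur.execute(
        "INSERT INTO prefetch (hostname, executable, run_count, last_run) "
        "VALUES (%s, %s, %s, %s)",
        (record["hostname"], record["executable"],
         record["run_count"], record["last_run"]),
    )
    conn.commit()
    conn.close()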

Joachim Metz has documented the prefetch structure very well (in fact, he updated it this month). You can see his documentation here:

One of the things that recently grabbed my attention about prefetch files is that you can find file reference numbers in them, something not many tools will show you. Needless to say, I think I have identified two of the unknown fields in the metrics array entries (v23) and want to continue doing more testing. (The offset table pictures come from Metz's paper.)


If you treat Unknown1 (offset 24) as a 6-byte field and Unknown2 as the 2 bytes immediately following it, the values accurately reflect a file record number and a file sequence number. That is to say, the 8 bytes starting at offset 24 read as a file reference field.
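
As a quick illustration, here is a minimal sketch of splitting those 8 bytes into the record and sequence numbers. It assumes a little-endian 48-bit record number followed by a 16-bit sequence number; the function name and the default offset are just my working choices for this post.

    import struct

    def parse_metrics_file_reference(entry, offset=24):
        """Read 8 bytes of a v23 metrics array entry as a file reference.

        Assumes the layout described above: a 48-bit MFT record number
        followed by a 16-bit sequence number, little-endian.
        """
        (raw,) = struct.unpack_from("<Q", entry, offset)
        record_number = raw & 0x0000FFFFFFFFFFFF   # low 48 bits
        sequence_number = raw >> 48                # high 16 bits
        return record_number, sequence_number

    # entry here would be the raw bytes of one metrics array entry.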

I am not sure why they would be listed as empty values; maybe it is a version thing. I need to look at more files.


Anyway, this example prefetch file from the DC3 2009 Challenge seems to line up, though I have not tested others yet. I have, however, found another prefetch file that does not appear to have a volume information entry but does have metrics entries, so having file references in the metrics, and not just in the file references array, could be helpful.


The file references array is pointed to via the volume information structures.
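
In code, walking from a volume information entry to those file references might look something like the sketch below. The field positions (file references offset at +20 and data size at +24 within the volume information) and the 4-byte version plus 4-byte count in front of the array are my reading of Metz's documentation, so verify them against the paper before relying on them.

    import struct

    def read_file_references(volume_info):
        """Extract file references pointed to by a v23 volume information entry.

        volume_info is assumed to be a buffer that starts where the entry's
        relative offsets are measured from. Field positions here are my
        reading of Metz's documentation and should be double-checked:
          +20  file references offset
          +24  file references data size
        """
        refs_offset, refs_size = struct.unpack_from("<II", volume_info, 20)
        version, count = struct.unpack_from("<II", volume_info, refs_offset)

        references = []
        for i in range(count):
            (raw,) = struct.unpack_from("<Q", volume_info, refs_offset + 8 + i * 8)
            # Same 48-bit record / 16-bit sequence split as in the metrics entries.
            references.append((raw & 0x0000FFFFFFFFFFFF, raw >> 48))
        return references
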
I hope this information helps.

Here is the prefetch file I am looking at: https://docs.google.com/file/d/0B0hXPgyAlcJ1TVAxcmIzYkwtNGM 

Hopefully I will finish my tool soon and get back to the series I want to do.

Friday, June 13, 2014

What Makes a Great Tool in DFIR?

The other day on the Forensic Lunch we started a discussion about programming/scripting in DFIR and the movement towards common output formats and moving data between tools. This brings up part of a topic I have wanted to blog about for a while, so I figure I will start now. My intention is to go through the items I identify, with code examples, so others can see how to start incorporating them. Hopefully it will help students wanting to learn a language by providing a good starting point and examples of how to implement things I wish I had known how to do when I started.

Expectations of Community Tools in DFIR

When I started studying Digital Forensics for an Associate's degree, it quickly became apparent that I needed to learn a language to assist me. At the time it seemed like the majority of the community was using Perl, so that is what I started teaching myself. By the time I finished my Bachelor's degree, Python was gaining traction, and now the majority of the community seems to be using Python. I do enjoy using Python for my new projects. However, this is not about Perl or Python; it is about what we need to start incorporating into our tools to benefit the community. This would have been a useful topic when I was a student teaching myself a language.

Had I known what benefits the community, I would have had a baseline for what to learn when creating tools. I learned how to parse binary data and output it as a string. Job done. But does that benefit many people?

Willi Ballenthin touches on some of these topics. See his blog posts at:
http://www.williballenthin.com/blog/2014/02/07/towards-better-tools-part-1/
http://www.williballenthin.com/blog/2014/02/08/towards-better-tools-part-2/

So what do you expect from a decent tool that you will use often?

Here is what helps me:

  • Handle Images
  • Handle Unicode
  • Timezone Handling for Timestamps
  • Offsets of parsed data
  • Flexibility of Output
    • Choice of Delimiter for Text
    • Output to a SQLite Database
    • Output Formats (XLSX)
  • Automation ability (CLI or evidence manager)
  • Modular
  • Scalable
Over the next couple of weeks, I would like to start showing ways of incorporating these items in Python and Perl. As a first small example, there is a sketch of timezone handling for timestamps below. Please help me expand on this list and provide references for those wanting to learn a new language and build tools.
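
Windows artifacts like prefetch store their timestamps as 64-bit FILETIME values, counted in 100-nanosecond intervals since January 1, 1601 (UTC). Here is a minimal sketch of turning one into a timezone-aware timestamp so it can be rendered in whatever zone the case calls for; the helper name and the sample value are mine, not from any particular tool.

    from datetime import datetime, timedelta, timezone

    # Windows FILETIME epoch: 1601-01-01 00:00:00 UTC
    FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

    def filetime_to_datetime(filetime):
        """Convert a 64-bit FILETIME (100-ns ticks since 1601, UTC)
        into a timezone-aware datetime in UTC."""
        return FILETIME_EPOCH + timedelta(microseconds=filetime // 10)

    # Made-up sample value purely for demonstration.
    last_run_utc = filetime_to_datetime(0x01CA6E25E9F42C90)
    print(last_run_utc.isoformat())     # store and report in UTC
    print(last_run_utc.astimezone())    # re-render in the local timezone for display

Keeping everything in UTC internally and only converting for display is one way to avoid the usual daylight saving and cross-timezone headaches when correlating systems.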