My first two weeks at Mozilla

In the first couple of weeks at Mozilla I spent some time with the client- and server-side telemetry implementation. Telemetry allows Engineering to receive aggregate data about browser health in the field, e.g. cache hit rates or page load times across all browser instances.

In summary, the telemetry workflow looks more or less like this:

  1. Firefox generates telemetry data while it's being used, if the user has explicitly enabled the collection;
  2. the collected data is sent once a day to a server via HTTPS;
  3. the received data is collected into a queue and post-processed by a converter, which validates, compacts and compresses it before it is sent to persistent storage (see the sketch after this list);
  4. analysis jobs are run on the data in persistent storage and the results are presented on the telemetry dashboard;
  5. finally, developers can access the persistent data through custom map-reduce jobs to compute user-defined metrics.
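
The converter step is essentially a validate-compact-compress pipeline. Here is a minimal sketch of the idea in Python; the function name and the exact transformations are my own illustration, not the actual server code:

```python
import gzip
import json

def convert(raw_payload):
    """Validate, compact and compress a single telemetry submission."""
    record = json.loads(raw_payload)  # validation: reject malformed JSON early
    compact = json.dumps(record, separators=(",", ":"))  # strip whitespace
    return gzip.compress(compact.encode("utf-8"))  # compress before storage
```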

I have been working on a few small pieces of the project, namely:

  1. adding a feature to Firefox that allows certain telemetry probes to “expire”, i.e. stop being sent to the server (see the sketch after this list);
  2. integrating a C++ record compressor into the back end;
  3. running some map-reduce jobs to determine the share of users running from SSDs vs. HDDs.
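
To make the expiry idea concrete: a histogram definition can declare the version past which it should no longer report. The entry below is hypothetical (the probe name and field values are made up), but Firefox histograms are declared in a JSON file along these lines:

```json
"HYPOTHETICAL_CACHE_HIT_RATE": {
  "expires_in_version": "30",
  "kind": "linear",
  "high": "100",
  "n_buckets": 50,
  "description": "Cache hits as a percentage of cache lookups"
}
```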

In particular, for the last item, it seems that about 8% of Nightly users are running from SSDs, which is not as high as I initially suspected. Running this sort of query against the telemetry datasets is very easy: a JSON file specifies the filter for the datasets you want to analyze, and the actual analysis is implemented in a Python file as a map-reduce job.
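
For illustration, the SSD job might be structured like the sketch below. The filter file narrows the submissions, e.g. to a channel and a date; the dimension names here are an approximation from memory:

```json
{
  "version": 1,
  "dimensions": [
    {"field_name": "appUpdateChannel", "allowed_values": ["nightly"]},
    {"field_name": "submission_date", "allowed_values": ["20130901"]}
  ]
}
```

The Python file then implements the map and reduce phases over the filtered records; the payload field for the drive type and the exact function signatures are again illustrative rather than the real interface:

```python
import json

def map(key, dims, value, context):
    payload = json.loads(value)
    # "hasSSD" is a hypothetical payload field standing in for the real probe.
    has_ssd = payload.get("info", {}).get("hasSSD")
    if has_ssd is not None:
        context.write("SSD" if has_ssd else "HDD", 1)

def reduce(key, values, context):
    # Sum the per-record counts emitted by the mappers.
    context.write(key, sum(values))
```

The framework aggregates the reducer output, from which the SSD share can be read off directly.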