Mobile Web Jan 2022 V1

About the Model

The model was built and trained on roughly 25M random samples of LinkedIn Lite's RUM (Real User Monitoring) data from January 2022. Lite is LinkedIn's main mobile web server application.

Model Input

| Model Input | Description | Some Examples |
| --- | --- | --- |
| Country code | ISO 3166-1 alpha-2 country code (more details) | us, in, br |
| OS Family | Name of the operating system | iOS, Android, Windows |
| OS version | Major version number of the OS | 14, 8 |
| Browser | Name of the browser | Chrome, Safari |
| Browser version | Major version number of the browser | 14, 74 |
| ASN number | Autonomous System Number, like your Internet Service Provider | "7922" |

OS and browser information is extracted from the user agent string using the UA Parser library. Extracting this information in any other way may work, but it can lead to train-serve data skew.
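As a rough illustration, the extraction could look like the sketch below, which assumes the ua-parser Python package (one distribution of the UA Parser project) and its legacy user_agent_parser API; the demo notebooks show the exact calls and field names we rely on, so treat everything here as an assumption.

```python
# Minimal sketch, assuming the ua-parser Python package (pip install ua-parser)
# and its legacy user_agent_parser API; see the demo notebooks for the exact usage.
from ua_parser import user_agent_parser

ua_string = ("Mozilla/5.0 (iPhone; CPU iPhone OS 14_4 like Mac OS X) "
             "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 "
             "Mobile/15E148 Safari/604.1")

parsed = user_agent_parser.Parse(ua_string)

os_family = parsed["os"]["family"]              # e.g. "iOS"
os_version = parsed["os"]["major"]              # e.g. "14"
browser = parsed["user_agent"]["family"]        # e.g. "Mobile Safari"
browser_version = parsed["user_agent"]["major"]  # e.g. "14"

print(os_family, os_version, browser, browser_version)
```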

Using Digital Element's Jan 2022 database, we extracted ASN numbers from IP addresses. You may have to purchase their license to match exactly what we did; however, any similarly accurate translation service should work. While trying out our demos, you can use a free website like this one or this one to obtain the ASN number for the IP address of interest.
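If you prefer to look up ASNs programmatically while experimenting, here is a hedged sketch using the open-source ipwhois package. This is not the Digital Element database we used, so the returned ASN may occasionally differ from what the model saw during training.

```python
# Hedged sketch: map an IP address to its ASN with the ipwhois package
# (pip install ipwhois). This is an alternative to the Digital Element
# database used in training, so results may not always match exactly.
from ipwhois import IPWhois

result = IPWhois("8.8.8.8").lookup_rdap(depth=1)
print(result["asn"])  # e.g. "15169"
```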

Model Output

The model is trained to return the page load time class. For now, we have two classes:

  1. Less than 1300ms and
  2. Greater than or equal to 1300ms,

but we may expand to more classes in the future based on the use cases. As shown in the ssr-mobile-web-model-demo.ipynb example, the TF predictor returns a probability distribution over these PLT classes. We can simply pick the class with the highest probability as the model's prediction. The "SavedModels from Estimators" section of this guide is another end-to-end example of this pattern.
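For illustration, the sketch below shows that "pick the class with the highest probability" pattern in Python. The model path, signature key, and feature names are assumptions, so check the demo notebook for the exact serving inputs the exported model expects.

```python
import tensorflow as tf

# Minimal sketch, assuming an Estimator-style SavedModel whose "predict"
# signature accepts serialized tf.train.Example protos. The model path,
# signature key, and feature names below are assumptions.
model = tf.saved_model.load("mweb-jan-2022-v1")
predict_fn = model.signatures["predict"]

def make_example(country, os_family, os_version, browser, browser_version, asn):
    """Serialize one set of model inputs as a tf.train.Example."""
    def bytes_feature(value):
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value.encode("utf-8")]))
    feature = {
        "country_code": bytes_feature(country),   # assumed feature names
        "os_family": bytes_feature(os_family),
        "os_version": bytes_feature(os_version),
        "browser": bytes_feature(browser),
        "browser_version": bytes_feature(browser_version),
        "asn": bytes_feature(asn),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()

example = make_example("us", "iOS", "14", "Safari", "14", "7922")
output = predict_fn(examples=tf.constant([example]))

probabilities = output["probabilities"][0].numpy()  # one probability per PLT class
predicted_class = int(probabilities.argmax())       # 0: < 1300ms, 1: >= 1300ms
print(predicted_class, probabilities)
```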

How to use it

We currently have examples of how to use this model in Python and Node.js:

FAQs

How was the model built?

A deep neural network model is trained on historical RUM data from LinkedIn Lite. Lite is a server-side rendered application, and its onLoad event is used as a proxy for page load time (PLT). Standard features such as browser, OS, and ASN are used as inputs, with the bucketed PLT as the target, and fed to a tf.estimator.DNNClassifier for training. The trained model is exported to disk in the SavedModel format, which we are distributing as part of this repo. We are working on a detailed post for our Engineering Blog on how we trained hundreds of models in an automated manner to find the best one. In the meantime, you can check out this presentation and video, which cover much of the same content.
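To give a concrete picture of that setup, the sketch below shows the general shape of a tf.estimator.DNNClassifier trained on these categorical features and exported in the SavedModel format. The feature names, hash-bucket sizes, embedding dimensions, and hidden-unit sizes are illustrative assumptions, not our exact configuration, and it requires a TensorFlow version that still ships tf.estimator.

```python
import tensorflow as tf  # needs a TF version that still includes tf.estimator

# Illustrative sketch only: feature names, bucket sizes, embedding dimensions,
# and network shape are assumptions, not the exact configuration we trained with.
categorical_columns = [
    tf.feature_column.categorical_column_with_hash_bucket("country_code", hash_bucket_size=500),
    tf.feature_column.categorical_column_with_hash_bucket("os_family", hash_bucket_size=100),
    tf.feature_column.categorical_column_with_hash_bucket("os_version", hash_bucket_size=100),
    tf.feature_column.categorical_column_with_hash_bucket("browser", hash_bucket_size=200),
    tf.feature_column.categorical_column_with_hash_bucket("browser_version", hash_bucket_size=200),
    tf.feature_column.categorical_column_with_hash_bucket("asn", hash_bucket_size=50000),
]
feature_columns = [tf.feature_column.embedding_column(c, dimension=8) for c in categorical_columns]

estimator = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[64, 32],
    n_classes=2,  # PLT < 1300ms vs. PLT >= 1300ms
    model_dir="/tmp/plt_model",
)

# train_input_fn should yield (features, bucketed_plt_label) batches from the RUM data.
# estimator.train(input_fn=train_input_fn, steps=100_000)

# Export in the SavedModel format with a parsing serving input receiver, so the
# serving signatures accept serialized tf.train.Example protos.
feature_spec = tf.feature_column.make_parse_example_spec(categorical_columns)
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
# estimator.export_saved_model("exported_models", serving_input_fn)
```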