Predictive Scoring

Learn how to use the Predictive Scoring feature in Factors

Overview


Set up a prediction to gauge the likelihood of an account performing a 'target event' in the future, powered by machine learning. Each account receives a prediction score: the higher the score, the more likely the account is to perform the target event within the given time range.


Glossary


1. Target event: The event whose likelihood we are predicting - such as a form submission, a deal being created in your CRM, or a deal becoming closed won.

A target event is defined by three parameters: the event itself, filters (optional but recommended), and a time range (currently defaulting to 7 days).

2. Data Check: On setting the target, we run a data check to verify there are enough positive accounts (accounts that performed the target event) in past data for model training. In the last 100 days, there must be at least 200 positive accounts, or positives must make up at least 0.1% of all active accounts. If there aren't enough positives, we ask you to change the target event or filters so that enough data is available.
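
For illustration, the check could be sketched like this; the function and parameter names are hypothetical, but the thresholds match those above.

```python
# A minimal sketch of the data check described above. Names are
# illustrative, not Factors' internal API.
def passes_data_check(positive_accounts: int, active_accounts: int) -> bool:
    """Last-100-days window: require at least 200 positive accounts,
    or positives making up at least 0.1% of all active accounts."""
    if positive_accounts >= 200:
        return True
    return active_accounts > 0 and positive_accounts / active_accounts >= 0.001
```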

Each account's timeline is converted to a feature vector containing a shortlist of properties and event counts for the account. Each of these vectors acts as a datapoint for our model.
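
As a rough illustration, with a hypothetical shortlist of events:

```python
from collections import Counter

# Illustrative only: convert an account's timeline (a list of event
# names) into a fixed-length vector of counts over shortlisted events.
SHORTLISTED_EVENTS = ["page_view", "form_submission", "pricing_page_visit"]

def to_feature_vector(timeline: list[str]) -> list[int]:
    counts = Counter(timeline)
    return [counts[event] for event in SHORTLISTED_EVENTS]
```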

3. Training: For a given target, the project's past 105 days of data are used to train the model. We pick the most recently active accounts and slice each account's timeline at multiple lookback lengths (7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 91, and 98 days) to create datapoints. The model then learns the underlying patterns from these timelines.

Using these learned patterns, we assign a score, based on its own timeline, to each account that has been active in the last 100 days and has not performed the target event in that period.
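
A hypothetical sketch of the datapoint generation, reusing the to_feature_vector() sketch above:

```python
# One account yields several training datapoints, one per lookback
# length (7, 14, ..., 98 days).
LOOKBACK_DAYS = range(7, 105, 7)  # 7, 14, ..., 98

def make_datapoints(events: list[tuple[int, str]]) -> list[list[int]]:
    """events: (days_before_cutoff, event_name) pairs for one account."""
    return [
        to_feature_vector([name for age, name in events if age <= days])
        for days in LOOKBACK_DAYS
    ]
```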

4. Testing: For testing, we pick a previous week from the project's data that was not used in training, where the positives are already known. We mimic the prediction flow for that week and get a score for each test account. We then rank the test accounts by score and, using the accounts that actually turned out positive that week, compute the test results.
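
As a simple illustration of how such a ranking can be summarised (Factors' actual test metrics may differ), one can check how the known positives concentrate at the top:

```python
# Illustrative check: rank held-out accounts by score and measure the
# share of known positives in the top decile (a simple lift signal).
def top_decile_positive_rate(scored: list[tuple[float, bool]]) -> float:
    """scored: (score, turned_positive) pairs for the test week."""
    ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
    top = ranked[: max(1, len(ranked) // 10)]
    return sum(1 for _, positive in top if positive) / len(top)
```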

5. Prediction: Suppose we are predicting for the date range x1-x2. We take the 100-day timelines, ending just before x1, of all accounts that were active in those 100 days and have not performed the target event in them, and convert each to a feature vector. Feeding an account's feature vector into the trained model yields an internal score, which our scoring mechanism then converts into the final score.
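
A sketch of that flow, assuming a scikit-learn-style model interface (an assumption for illustration, not Factors' actual stack):

```python
# Hypothetical prediction pass for the window starting at x1.
def internal_scores(model, features: dict[str, list[int]]) -> dict[str, float]:
    """features: feature vectors keyed by account id, covering accounts
    active in the 100 days before x1 that have not performed the target
    event in that period."""
    ids = list(features)
    proba = model.predict_proba([features[i] for i in ids])[:, 1]
    return dict(zip(ids, proba))  # later percentile-ranked into final scores
```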

6. Scoring: The final score is percentile based. Once the accounts are ranked by their internal scores, we compute each account's percentile and round it to the final score according to the following table.

| Score | Percentile |
| ----- | ---------- |
| 100   | 95-100     |
| 90    | 85-95      |
| 80    | 75-85      |
| 70    | 65-75      |
| 60    | 55-65      |
| 50    | 45-55      |
| 40    | 35-45      |
| 30    | 25-35      |
| 20    | 15-25      |
| 10    | 5-15       |
| 0     | 0-5        |
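
In effect, percentiles are bucketed to the nearest multiple of 10, as in this minimal sketch (how exact boundary percentiles such as 95 are broken is an assumption here):

```python
# Illustrative rounding of a 0-100 percentile to the final score per the
# table above, e.g. 85-95 -> 90 and 95-100 -> 100.
def final_score(percentile: float) -> int:
    return int((percentile + 5) // 10) * 10
```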

How to set up this feature

  1. Under 'Settings', go to the 'Account Configurations' menu item and open the 'Predictive Scoring' tab.

  2. Click on 'New prediction' and give it a name that reflects what it predicts. Select the 'target event' that you want to compute the prediction for from the drop-down. This could be a website visit, a form submission, a company created in your CRM, or a deal becoming closed won.

  3. Run the data check to ensure we have enough data volume to create the prediction model. If this is successful, you can save the prediction rule. If unsuccessful, tweak the filters or the target event and try running the data check again.

  4. Click on 'Build model' to save the prediction. The status of the prediction rule is updated to 'building', and it takes a few hours to build the model and publish its results on the test data. The test data is a subset of the historic data that is not used in building the model, held out to measure the accuracy of the model. Come back to the prediction rule in 4-6 hours to view the test results.

  5. Once the test results are ready, the status of the prediction rule is updated to 'ready to publish'.

    Go to the prediction rule and view the results on the test data. If you are happy with the results, 'publish' the prediction rule. If not, try setting up another prediction rule with a different set of filters and/or target event.

  6. Once you publish the rule, it will take a couple of hours for prediction scores to be created for all accounts that have not performed the target event in the last 100 days. The status is updated to 'published' once the scores are available in your Factors project to use as an account property.

  7. Look for an account property with the same name as the prediction rule to use across Factors.

Using this feature

Once an account property is created from the prediction rule, it can be found in the properties drop-down across features.

  1. Use these scores in Factors' alerts and workflows to tell SDRs on their platform of choice (Slack, Teams, or Salesforce) which accounts are likely to book meetings in the next few days. SDRs can focus on converting these accounts as their likelihood to book meetings is high based on historic data from your project.

  2. Use it in segments to create a cohort of your ICP-fit accounts that are likely to become a deal. Help AEs prioritize accounts based on their likelihood to convert, along with other parameters like engagement and activity across platforms, to improve chances of conversion.

  3. These scores can also be used to create a cohort of accounts likely to submit a form or visit the website, and be used in retargeting campaigns via our LinkedIn Audience Sync or Google Audience Sync features.

  4. Accounts that are likely to convert can be sent as conversion feedback to Google through our Google Enhanced Conversions feature.

In case of any questions, feel free to reach out to us at support@factors.ai.
