What can the data be used for?
Enabling all kinds of use cases, e.g.
Measuring the success of marketing efforts to identify what drives conversions allows funds to be allocated efficiently. Recent developments around more restrictive browsers and privacy laws make it increasingly difficult to gather the necessary data. Consent-aware cohort-based or completely anonymous tracking can provide a full picture.
The more a company knows about their customers on an individual level, the better it can tailor products and services to their needs to drive revenues and retention. Collecting all customer interactions in a central place, for example in a customer data platform, is a tedious task but can provide a lot of insights into individual customer journeys.
ML and artificial intelligence are used for all kinds of advanced applications, e.g. content recommendations, pricing strategies, loyalty and retention programs, and forecasting. In order for ML/AI models to produce good results, they need to be fed high-quality data; for algorithms more than anywhere else, the old rule holds: Garbage in, garbage out.
Product managers need usage data from their products, ideally blended with marketing data. Their big advantage over others is that they usually only need cohort-based data to identify behavioral patterns. If done right, it is easy to comply with the increasingly restrictive privacy laws, even when not having the user's consent to collect PII.
Any optimization effort requires data. When aiming to maximize conversion rates, minimize churn, or increase average cart values or prices for individual products and services, historical data usually establishes a baseline. When testing hypotheses or running A/B/n tests, additional data needs to be collected properly to capture the respective impact.
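As a minimal sketch of what "historical data establishes a baseline" means in practice, the snippet below compares a hypothetical control group against a test variant and computes the relative lift. All numbers are illustrative, not real client data.

```python
# Hypothetical A/B test numbers -- illustrative only, not client data.
def conversion_rate(conversions: int, visitors: int) -> float:
    """Share of visitors who converted."""
    return conversions / visitors

# Historical data establishes the baseline (control group).
baseline = conversion_rate(conversions=120, visitors=4000)   # 3.0%
variant = conversion_rate(conversions=156, visitors=4000)    # 3.9%

# Relative lift of the variant over the baseline.
lift = (variant - baseline) / baseline

print(f"baseline: {baseline:.1%}, variant: {variant:.1%}, lift: {lift:+.0%}")
```

In a real test, the lift would of course also need a significance check before any decision is made.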
Complying with strict privacy laws like the CCPA, GDPR, and PIPEDA becomes increasingly difficult. But by using techniques like pseudonymization and anonymization, it is still possible to provide high-quality user data. If the user doesn't consent to the collection of PII, tracking on a cohort basis still allows for most use cases. Integration with consent management platforms (CMPs) is possible as well.
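To make the pseudonymization technique concrete, here is a minimal sketch using a keyed hash: the secret salt and the example e-mail address are hypothetical placeholders, and a production setup would also need key management and rotation.

```python
import hashlib
import hmac

# Hypothetical secret kept server-side; rotating it breaks linkability.
SECRET_SALT = b"replace-with-a-real-secret"

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256).

    The result is stable per user, so journeys can still be stitched
    together downstream, but the original PII is never stored there.
    """
    return hmac.new(SECRET_SALT, user_id.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("jane.doe@example.com"))  # 64-char hex digest, no PII
```

Because the hash is keyed with a server-side secret, it cannot be reversed by anyone who only sees the downstream data.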
Why focus so much on implementation?
Data quality independent from tools
Adobe, Google, Mixpanel, Snowplow? They all use the same technology.
A new solution implemented like a prior one won't produce better data.
Data quality issues are almost never caused by the analytics tool itself or its libraries. While some solutions are better than others, they are all very close together. It's the approach to the implementation that makes all the difference.
The only solution: Focus on a high-quality analytics tool implementation!
Since a different analytics tool won't make a substantial difference, you should focus on a great implementation, no matter the tool. Unfortunately, switching the analytics tool won't solve the data issues if the new implementation is as bad as the old one.
How can the data support multiple use cases?
Modelling holistic, multipurpose data
Even when the event-emitting applications are the same, the derived analytics data that users expect can differ greatly depending on the specific use case. This often results in one-sided data models and a requirement for one implementation per use case. Our service is different in that it is generic by design to support all kinds of data users. Most of the time we work with these three groups:
Data scientists usually have to spend a lot of their time cleaning and transforming data because machine learning models and artificial intelligence need accurate and logical data to produce good results: Garbage in, garbage out.
Digital marketers are struggling with increasingly bad data quality due to stricter data privacy regulations and web browsers and mobile apps not sharing as much information as they used to: Anonymized, cohort-based tracking to the rescue.
Product managers have to combine many different data sources to get a full picture of the entire customer journey. The most challenging disconnect is often between marketing and product usage data, which requires smart identity resolution.
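As a small illustration of the cohort-based tracking mentioned above, the sketch below aggregates hypothetical consent-less events that carry no user identifiers at all, only a coarse cohort label. Cohort names and actions are made up for the example.

```python
from collections import Counter

# Hypothetical consent-less events: no user IDs, only coarse attributes.
events = [
    {"cohort": "2024-01-signup", "action": "purchase"},
    {"cohort": "2024-01-signup", "action": "page_view"},
    {"cohort": "2024-02-signup", "action": "purchase"},
    {"cohort": "2024-01-signup", "action": "purchase"},
]

# Aggregate per cohort -- individual users are never identifiable.
purchases_per_cohort = Counter(
    e["cohort"] for e in events if e["action"] == "purchase"
)
print(dict(purchases_per_cohort))
# {'2024-01-signup': 2, '2024-02-signup': 1}
```

Behavioral patterns per cohort remain visible, while no single user can be singled out of the data.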
Where can we send the data?
Constantly adding more destinations
- Google Analytics (UA & GA4)
- Adobe Analytics
- Snowplow Analytics
- Google Ads
- Facebook Ads
- Commission Junction
- In-house database
- Server-side Google Tag Manager
- Server-side Adobe Launch
- Server-side Tealium
- Google BigQuery
- AWS Redshift
- Google Pub/Sub
- AWS Kinesis
- Apache Kafka
Why let us collect the data for you?
Building on 10 years of experience
Founder has been doing this since 2012
The founder of Capely started his career in 2012 working on the roll-out of Adobe Analytics at one of Germany's largest media companies. Since then, he has worked on numerous analytics projects in North America and Europe, always trying to make the next implementation better than the previous one.
German traits: Meticulous and dogmatic
An unparalleled attention to detail is part of our company culture, as is a tendency to over-engineer. But don't worry: customers pay for the data, not for the effort that goes into the implementations. Like the web framework Ruby on Rails, we believe there are certain ways to do things.
Not cutting corners & long-term oriented
When you have worked in the data space long enough, you know that there are no shortcuts. Collecting reliable, accurate behavioral data from a multitude of applications using a variety of technologies is an extremely tedious undertaking. But it's worth doing right, because there is simply no alternative.
Where do we collect data?
Supporting most technology stacks
HTML rendered on the server side, usually by a CMS like WordPress, Drupal, Magnolia, etc.
Single-page applications and progressive web applications using frameworks like React, Angular, Vue, Next, etc.
Native iOS and Android applications written in languages like Swift, Objective-C, Kotlin, and Java.
Content delivery networks and API gateways, e.g. Cloudflare, Fastly, Akamai, CloudFront, etc.
REST APIs and webhooks that can be queried from other client-side and server-side applications.
Utilizing an existing legacy implementation. Disclaimer: No data quality improvements possible.
Save ~50% on implementation
Quality data collection as a service
You let us take care of the implementation
We align with your IT directly on a scalable, maintainable architecture following best practices and industry standards. We define the data schemas and make sure the data is accurate and can be used for all kinds of use cases so that you can focus on the fun part: Using the data to build something.
We provide you with accurate, multipurpose data
We handle all the tedious tasks around data design and collection. Using our experience with all kinds of use cases, we implement everything in a generic, event-driven way. This also means that we don't model the data for specific use cases. It will be one-size-fits-all by design.
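One way to picture the "generic, event-driven" data design is a single event shape that every interaction shares, with use-case-specific details pushed into free-form properties. The schema below is an illustrative sketch, not our actual production schema; all field and event names are assumptions.

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class Event:
    """A generic, use-case-agnostic analytics event (illustrative schema).

    Every interaction has the same shape -- who (pseudonymous), what,
    when -- plus free-form properties. Downstream consumers derive their
    own use-case-specific views instead of dictating the collection model.
    """
    name: str        # e.g. "product_added_to_cart"
    actor: str       # pseudonymous ID or cohort label, never raw PII
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    properties: dict = field(default_factory=dict)

event = Event(
    name="product_added_to_cart",
    actor="cohort:2024-05-signup",
    properties={"sku": "SKU-123", "price": 19.99},
)
print(asdict(event)["name"])  # product_added_to_cart
```

Because the shape never changes per use case, the same stream can feed marketing, product, and ML consumers alike.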
A dream come true: Better results for less money
By standardizing and automating our implementation process, we are able to provide you with accurate data for roughly 50% of the usual cost. Using the same approach across different clients also means that we can't take every client on. Whether we can help you depends primarily on your technology stack.