Cape.ly

What can the data be used for?

Enabling all kinds of use cases, e.g.

Marketing analytics

Measuring the success of marketing efforts to identify what drives conversions allows funds to be allocated efficiently. Recent developments around more restrictive browsers and privacy laws make it increasingly difficult to gather the necessary data. Consent-aware, cohort-based, or completely anonymous tracking can still provide the full picture.

One 360° customer view

The more a company knows about its customers on an individual level, the better it can tailor products and services to their needs to drive revenue and retention. Collecting all customer interactions in a central place, for example in a customer data platform, is a tedious task but can provide a lot of insight into individual customer journeys.

Machine learning & AI

Machine learning and artificial intelligence are used for all kinds of advanced applications, e.g. content recommendations, pricing strategies, loyalty and retention programs, and forecasting. For ML/AI models to produce good results, they need to be fed high-quality data, because for algorithms in particular the old adage holds: Garbage in, garbage out.

Product analytics

Product managers need usage data from their products, ideally blended with marketing data. Their big advantage over others is that they usually only need cohort-based data to identify behavioral patterns. If done right, it is easy to comply with increasingly restrictive privacy laws, even without the user's consent to collect PII.

Optimization & testing

Any optimization effort requires data. When aiming to maximize conversion rates, minimize churn, or increase average cart values or prices for individual products and services, historical data usually establishes a baseline. When testing hypotheses or running A/B/n tests, additional data needs to be collected properly to capture the impact of each variant.
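As a small illustration of how such test data can be collected consistently (a sketch only; the function and variant names are hypothetical and not part of any specific product): assigning variants by hashing a stable identifier keeps each user in the same variant across sessions without storing assignments anywhere.

```typescript
// Hypothetical sketch: deterministic A/B/n variant assignment.
// Hashing a stable (pseudonymous) user ID means the same user always
// sees the same variant, without persisting the assignment server-side.
function assignVariant(userId: string, experiment: string, variants: string[]): string {
  // Simple FNV-1a hash over experiment name + user ID
  let hash = 0x811c9dc5;
  for (const char of `${experiment}:${userId}`) {
    hash ^= char.charCodeAt(0);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return variants[hash % variants.length];
}

// Example usage with made-up IDs and variant names:
const variant = assignVariant("user-123", "checkout-button-color", ["control", "blue", "green"]);
```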

Privacy compliance

Complying with strict privacy laws like the CCPA, GDPR, and PIPEDA is becoming increasingly difficult. But by using techniques like pseudonymization and anonymization, it is still possible to provide high-quality user data. If the user doesn't consent to the collection of PII, cohort-based tracking still covers most use cases. Integration with consent management platforms (CMPs) is possible.
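To illustrate the difference between the two techniques (a rough sketch only; the function names and salt handling are hypothetical, not our actual pipeline): pseudonymization replaces the raw identifier with a keyed hash so events from one user can still be joined, while consentless cohort tracking drops the identifier entirely and keeps only a coarse bucket.

```typescript
import { createHmac } from "crypto";

// Pseudonymization: a keyed hash still links events of the same user,
// but the raw identifier (e.g. an email address) never leaves the pipeline.
function pseudonymize(rawId: string, secretSalt: string): string {
  return createHmac("sha256", secretSalt).update(rawId).digest("hex");
}

// Anonymous / cohort mode: without consent, drop the identifier and keep
// only a coarse cohort label (here: the calendar week of the first visit).
function cohortOf(firstVisit: Date): string {
  const msSinceYearStart =
    firstVisit.getTime() - Date.UTC(firstVisit.getUTCFullYear(), 0, 1);
  const week = Math.floor(msSinceYearStart / (7 * 24 * 3600 * 1000));
  return `${firstVisit.getUTCFullYear()}-W${week}`;
}
```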

Why focus so much on implementation?

Data quality independent from tools

  • Analytics tools like Adobe Analytics, Google Analytics, Snowplow Analytics, etc. all use the same technologies (primarily JavaScript, cookies, and localStorage). We know all the leading analytics solutions inside out, and we have yet to come across a feature that really sets an analytics provider apart from the rest. The crucial part that defines data quality is always the quality of the implementation, meaning how well the particular analytics tool is implemented technically. The same tool can generate very poor data with a bad implementation and phenomenal data with a good one.

  • 1

    Adobe, Google, Mixpanel, Snowplow? They all use the same technology.

    Event data is usually collected using tracking libraries, e.g. JavaScript files that get loaded into websites, or included and shipped with mobile applications as native dependencies. While these libraries differ in minor features, they all use the same base technologies (see the sketch after this list).

  • 2

    A new solution implemented like a prior one won't produce better data.

    Data quality issues are almost never caused by the analytics tool itself or its libraries. While some solutions are certainly better than others, they are all very close together. It's the approach to the implementation that makes all the difference.

  • 3

    The only solution: Focus on a high-quality analytics tool implementation!

    Since different analytics tools won't make a substantial difference, you should focus on a great implementation, no matter the tool. Unfortunately, switching the analytics tool won't solve the data issues if the new implementation is as bad as the old one.
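To make that concrete, here is a deliberately minimal sketch of what browser tracking libraries boil down to, regardless of vendor: persist an identifier, attach it to each event, and send the event to a collection endpoint. The endpoint URL and field names below are hypothetical, not any vendor's actual API.

```typescript
// Minimal sketch of a browser tracking snippet:
// persist an ID (localStorage), attach it to each event,
// and ship the event to a collection endpoint.
const STORAGE_KEY = "anonymous_id";

function getAnonymousId(): string {
  let id = localStorage.getItem(STORAGE_KEY);
  if (!id) {
    id = crypto.randomUUID();
    localStorage.setItem(STORAGE_KEY, id);
  }
  return id;
}

function track(eventName: string, properties: Record<string, unknown> = {}): void {
  const payload = JSON.stringify({
    event: eventName,
    properties,
    anonymousId: getAnonymousId(),
    url: location.href,
    timestamp: new Date().toISOString(),
  });
  // sendBeacon survives page unloads; fall back to fetch if it is unavailable.
  if (!navigator.sendBeacon("https://collect.example.com/events", payload)) {
    fetch("https://collect.example.com/events", { method: "POST", body: payload, keepalive: true });
  }
}

track("page_view");
```

Everything vendors add on top (batching, consent hooks, automatic click tracking) is built on these same primitives, which is why the implementation matters more than the tool.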

Leave the tedious tasks to us

Focus on the important things

What we do for you:

  • Implementation
  • Work with your IT
  • Data quality

What is configured via UI:

  • Data mapping
  • Data transformation
  • Business logic

What you can focus on:

  • Data requirements
  • Using the data
  • Business value

How can the data support multiple use cases?

Modelling holistic, multipurpose data

  • Even when the event-emitting applications are the same, the analytics data derived from them can look very different depending on the specific use case somebody is working on. This often results in one-sided data models and a requirement for one implementation per use case. Our service is different in that it is generic by design to support all kinds of data users (a sketch of such a generic event model follows this list). Most of the time we work with these three groups:

  • 1

    Data scientists

    Data scientists usually have to spend a lot of their time cleaning and transforming data because machine learning models and artificial intelligence need accurate and logical data to produce good results: Garbage in, garbage out.

  • 2

    Digital marketers

    Digital marketers are struggling with increasingly bad data quality due to stricter data privacy regulations and web browsers and mobile apps not sharing as much information as they used to: Anonymized or cohort-based tracking to the rescue.

  • 3

    Product managers

    Product managers have to combine many different data sources to get a full picture of the entire customer journey. The most challenging disconnect is often between marketing and product usage data, which requires smart identity resolution.
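The sketch below shows what such a generic, multipurpose event envelope could look like; the field names are illustrative, not our production schema.

```typescript
// Illustrative sketch of a generic, multipurpose event envelope
// (field names are examples, not our actual schema).
interface AnalyticsEvent {
  eventId: string;                     // unique per event, for deduplication
  eventName: string;                   // e.g. "product_viewed", "order_completed"
  timestamp: string;                   // ISO 8601
  consent: "full" | "pseudonymous" | "anonymous";
  identity: {
    userId?: string;                   // only with consent; enables identity resolution
    anonymousId?: string;              // pseudonymous device/browser ID
    cohort?: string;                   // coarse bucket for consentless tracking
  };
  context: {
    source: "web" | "ios" | "android" | "server";
    page?: { url: string; referrer?: string };
    campaign?: { source?: string; medium?: string; name?: string };
  };
  properties: Record<string, unknown>; // event-specific payload
}
```

Because data scientists, digital marketers, and product managers all read from the same envelope, no single use case requires its own implementation.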

Where can we send the data?

Constantly adding more destinations

Analytics solutions

  • e.g.
  • Google Analytics (UA & GA4)
  • Adobe Analytics
  • Snowplow Analytics

Marketing platforms

  • e.g.
  • Google Ads
  • Facebook Ads
  • Commission Junction

Customer data platforms

  • e.g.
  • Segment
  • RudderStack
  • In-house database

Server-side tag managers

  • e.g.
  • Server-side Google Tag Manager
  • Server-side Adobe Launch
  • Server-side Tealium

Cloud databases

  • e.g.
  • Google BigQuery
  • Elasticsearch
  • AWS Redshift

Data streams

  • e.g.
  • Google Pub/Sub
  • AWS Kinesis
  • Apache Kafka

Why let us collect the data for you?

Building on 10 years of experience

  • 1

    Founder has been doing this since 2012

    The founder of Capely started his career in 2012 working on the roll-out of Adobe Analytics at one of Germany's largest media companies. Since then, he has worked on numerous analytics projects in North America and Europe, always trying to make the next implementation better than the previous one.

  • 2

    German traits: Meticulous and dogmatic

    An unparalleled attention to detail is part of our company culture, as is a tendency to over-engineer. But don't worry: customers pay for the data, not for the effort that goes into the implementations. Similar to the web framework Ruby on Rails, we believe that there are certain ways to do things.

  • 3

    Not cutting corners & long-term oriented

    When you have worked in the data space long enough, you know that there are no shortcuts. Collecting reliable, accurate behavioral data from a multitude of applications using a variety of technologies is an extremely tedious undertaking. But it's worth doing right, because there is simply no alternative.

Ian Scheel

Where do we collect data?

Supporting most technology stacks

Conventional websites

HTML rendered on the server side, usually by a CMS like WordPress, Drupal, Magnolia, etc.

SPAs & PWAs

Single-page applications and progressive web applications using frameworks like React, Angular, Vue, Next, etc.

iOS & Android apps

Native iOS and Android applications written in languages like Swift, Objective-C, Kotlin, and Java.

CDNs & API gateways

Content delivery networks and API gateways, e.g. Cloudflare, Fastly, Akamai, CloudFront, etc.

APIs & webhooks

REST APIs and webhooks that can be queried from other client-side and server-side applications.

Piggybacking (legacy)

Utilizing an existing legacy implementation. Disclaimer: No data quality improvements possible.

Save ~50% on implementation

Quality data collection as a service

  • 1

    You let us take care of the implementation

    We align with your IT directly on a scalable, maintainable architecture following best practices and industry standards. We define the data schemas and make sure the data is accurate and can be used for all kinds of use cases so that you can focus on the fun part: Using the data to build something.

  • 2

    We provide you with accurate, multipurpose data

    We handle all the tedious tasks around data design and collection. Using our experience with all kinds of use cases, we implement everything in a generic, event-driven way. This also means that we don't model the data for specific use cases. It will be one-size-fits-all by design.

  • 3

    A dream come true: Better results for less money

    By standardizing and automating our implementation process, we are able to provide you with accurate data for roughly 50% of the usual cost. Because we use the same approach across different clients, we can't take every client on. Whether we can help you depends primarily on your technology stack.