
⚡️ The New Possibilities

💡

Imagine if you had the opportunity to enable your product offering to natively take advantage of eye movements. Consider the immense potential of tracking areas of interest in real time, using eye movements as a direct navigation input within your application, and capitalizing on the raw and processed data itself.

Let that sink in for a moment. Once it has, you will need:

A framework with embedded eye-tracking technology that, with just a few lines of code, can help shape and strengthen future products with eye-enabled interaction, analytics and tracking.

https://images.unsplash.com/photo-1522134239946-03d8c105a0ba?ixlib=rb-1.2.1&q=85&fm=jpg&crop=entropy&cs=srgb

Whether your company makes physical products, operates as a digital-only platform, or sits somewhere in between, chances are that the value added by implementing attention tracking will greatly outweigh the few reasons for not doing it, especially if the technology can be adopted easily through an end-to-end framework embedded into your product.

Someone once said that the eyes are the windows to the soul... well, emerging technology is now capturing exactly that.


Static and dynamic attention tracking

When talking about the endless possibilities within attention tracking technology, it makes sense to distinguish the underlying tech by its Potential use cases. Roughly speaking, there are two main areas of attention tracking:

  1. Passive: This is the static and more "traditional" kind, where you want to capture unbiased areas of interest. Here you see what users engage with and how they digest specific content. It is a one-way street, as you cannot interact with the content itself. The output is the raw coordinates on a regular X/Y chart, typically visualized as heatmaps. That’s it. But it still has tremendous value in many industries.

  2. Influential: This is the dynamic kind. You have just added a "new axis" to the chart: here you can convey and exchange information through the use of one’s eyes. This is the two-way street. It becomes possible to command an output based on what you are looking at, whether that is popping a balloon in a game, composing sentences with an on-screen keyboard, scrolling through an ebook, activating a remote screen, or moving a cursor. It is dynamic. It is useful. It is true product enablement. And it is gesture-based.
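To make the passive output concrete, here is a minimal Python sketch of how raw X/Y gaze coordinates could be binned into a grid, the raw material for a heatmap. The function name, grid size and sample data are illustrative assumptions, not the API of any specific framework:

```python
from collections import Counter

def gaze_heatmap(points, width, height, cols=4, rows=4):
    """Bin raw (x, y) gaze coordinates into a cols x rows grid.

    `points` is a list of (x, y) tuples in screen pixels; the returned
    Counter maps (col, row) cells to sample counts, which a heatmap
    overlay can then color by intensity.
    """
    cells = Counter()
    for x, y in points:
        if 0 <= x < width and 0 <= y < height:  # ignore off-screen samples
            col = int(x * cols / width)
            row = int(y * rows / height)
            cells[(col, row)] += 1
    return cells

# Example: three samples cluster in the top-left quadrant of a 2x2 grid.
samples = [(100, 80), (120, 90), (110, 85), (700, 500)]
heat = gaze_heatmap(samples, width=800, height=600, cols=2, rows=2)
```

A real pipeline would use far more samples and a finer grid, but the principle is the same: the passive output is just counts of where the eyes rested.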


Interaction, tracking and analytics

As mentioned above, eye-enabling technology has endless outputs. Across the passive and the influential use cases, three different components can be achieved, either on their own or in combination with one another:

  1. Interaction: Navigate with the eyes. Let your users interact with and navigate your digital product hands-free, either as a primary or a secondary mode of navigation. This can complement your current product with Additional navigation for hands-free hygiene, or it can form the core of a new, more accessible version of your product that serves the growing accessibility market. This feature will advance productivity, accessibility, robotics, offline Point-of-Sale, AR/VR, and many more areas. Do you agree? Blink twice to confirm.

  2. Tracking: See where they see. Track users' eye movements and behavior to help deduce (or induce) conclusions. This is relevant for most use cases, whether that is psychology, neuroscience, automotive, aviation, defence, security, anthropology, freelance sites, surveillance, QA, customer feedback, edtech, gaming, recruitment processes, Tangible research, etc.

  3. Analyzing: Digest the what, where, when and for how long. Once you have tracked users' eye movements, it is valuable to analyze that data. This enables builders of future products to create better solutions. Though intertwined with the tracking feature above, this transparent layer, which the user will not notice, becomes the company's ammunition to build even better products. It goes without saying that analytics of eye movements is tremendously valuable in areas such as diagnostics, treatments, disease progression, Design & UX/UI, heatmaps, ads, education, seniority, performance, etc.
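As an illustration of the gesture-based interaction component, the sketch below shows one common technique, dwell-time selection: a target counts as "clicked" once the gaze has rested on it long enough. Every name, threshold and the fixed sample interval here is an assumption made for the example, not part of any particular framework:

```python
def dwell_select(gaze_samples, target, radius, dwell_ms, sample_interval_ms):
    """Return True once the gaze has stayed within `radius` pixels of
    `target` for at least `dwell_ms` milliseconds.

    `gaze_samples` is an ordered list of (x, y) points captured every
    `sample_interval_ms`; leaving the target region resets the timer.
    """
    needed = dwell_ms / sample_interval_ms  # consecutive samples required
    streak = 0
    tx, ty = target
    for x, y in gaze_samples:
        inside = (x - tx) ** 2 + (y - ty) ** 2 <= radius ** 2
        streak = streak + 1 if inside else 0
        if streak >= needed:
            return True
    return False

# At roughly 30 Hz (~33 ms per sample), 20 consecutive on-target samples
# comfortably cover a 500 ms dwell threshold.
on_button = [(400, 300)] * 20
selected = dwell_select(on_button, target=(400, 300), radius=40,
                        dwell_ms=500, sample_interval_ms=33)
```

Dwell time is only one gesture; blinks, smooth pursuit and saccade patterns can be detected with the same sample-stream structure.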


Uncover the massive potential hidden right in front of our eyes. Don't wait for it; just jump in, the water is fine!

https://images.unsplash.com/photo-1464925257126-6450e871c667?ixlib=rb-1.2.1&q=85&fm=jpg&crop=entropy&cs=srgb

The use of attention tracking is not new; it is the infrastructure that is new. This enhances the accessibility and availability of the technology, making it far more affordable to uncover the massive potential hidden in plain sight. Though the infrastructure is new (currently on iOS devices, as stated in The tech stuff), that does not mean you should wait: jump in, the water is fine!


Hardware dependency is a two-sided story

Attention tracking is constrained by the hardware it operates on, and an accessible and affordable solution is no exception. Running on iOS devices such as an iPhone or an iPad will, for some laboratory use cases (as mentioned in Potential use cases), not yet offer enough precision and accuracy (and yes, those two are not the same; see Eye tracking terms).

With 15.9 million iPads and 72.9 million iPhones sold in the fourth quarter of 2019, it is fair to say that the volume and accessibility of Apple's iPads and iPhones make it possible to reach a much broader audience, one not confined by geographical boundaries the way traditional attention-tracking hardware is. With Apple's announcement of its new Magic Keyboard for iPad Pro, conducting traditional laptop-style research with screen, keyboard and cursor is now also possible.

The tradeoff of reaching such a big audience through common everyday devices is that the hardware is better suited to certain types of use cases. If you demand "single-pixel-hair-thin" precision, you will probably have to continue doing what you have always done. Unfortunately. This is typical of research fields such as neuroscience or disease progression, where the required sampling rate is 300–1200 Hz (samples per second), which only dedicated (and very expensive) eye-tracking hardware achieves. For other use cases, 30–60 Hz or less easily does the job.
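A quick back-of-the-envelope calculation shows why sampling rate matters. A short saccade lasts on the order of 30 ms (a rough typical value, assumed here for illustration): at consumer rates it is covered by only a sample or two, while dedicated lab hardware resolves it in detail. The helper below is purely illustrative:

```python
def samples_per_event(event_ms, rate_hz):
    """How many gaze samples land inside an eye event lasting
    `event_ms` milliseconds when sampling at `rate_hz`."""
    return event_ms * rate_hz / 1000.0

# A short ~30 ms saccade:
consumer = samples_per_event(30, 60)    # at 60 Hz: barely a couple of samples
lab = samples_per_event(30, 1000)       # at 1000 Hz: fully resolved
```

This is why heatmaps and dwell-based interaction work fine at 30–60 Hz, while fine-grained saccade research stays on dedicated hardware for now.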

If interested in learning more about the technical aspects of this attention tracking framework, go to The tech stuff.


Have more ideas you want to explore? Feel free to Get in contact.