Ideas Lab Q2 2021 Update

Ideas Lab · Jul 9, 2021

The worlds of sports and data analytics continue to converge, exemplified by everything from Vista Equity's investment in Stats Perform to the $1.6M in seed funding for Quarter4's AI sports betting platform. This means competition, at all levels, has become leaner, meaner and smarter. The use cases for AI in sports are varied, from personalized strength training to team-wide analytics. Human movement is being quantified more than ever, with terabytes of data generated, processed, analyzed and visualized faster than ever.

It is our mission, at Ideas Lab, to democratize these analytics and put professional-grade motion intelligence in the literal and figurative hands of everyone on the planet with a smartphone and a wifi connection.

And we’ve been busy.

Over the last several months we’ve made significant progress on both the back-end and front-end of the various technologies we’re building to deploy. We’ve improved the accuracy of our tracking, detection and pose models while enriching our repository of annotated images across baseball, yoga and other sports domains. We’ve also expanded our technical team and the physical place for that team to come together as one.

Below is a brief snapshot of where Ideas Lab has been over the last quarter and where we’re going next!

Starting with the most important part of Ideas Lab, our people:

People

Earlier this year, our brilliant CTO, Phokgoan Chioh, collaborated with an MBA course at NCCU Taiwan, “Developing Business Models of New Industrial Trends: Digital Transformation,” where he discussed AI and digital transformation (while scouting potential summer interns from the class).

Here is our fearless leader, Winston Yang, giving a press conference with the Minister of Science and Technology (MoST) regarding our joint announcement of the industry-academic partnership between MoST and the National Taiwan University of Sport (read a bit more on this in the “Partnerships” section below).

Also, meet some of the newest faces at Ideas Lab — Jorge Román (AI Engineer), Peiting Chen (Front-end & analytics), Chihyi Tsai (Cloud and API engineering), and Divyesh Murugan (Engineer) — here gathered at our new office in Taipei.

And hey, interested in joining the team? Check out some of our open positions here:

We’ve also engaged recruiting agencies to support our HR efforts, notably TalentMade, Glint and Paul Wright Group.

OK, to turn to some of the brand-spanking-new technologies in development…

Product

Uploader Platform

For our platform, we just completed the backend, compiled in Golang (non-techie translation: an open-source programming language that simplifies software development). We've productized this system as a video analytics platform that allows users to upload their own videos and begin tracking a range of sport- and domain-specific metrics. Individual metrics can be viewed one at a time or all together within a dashboard. Full sharing features will be added so that users can share both videos and their corresponding analytics with friends, colleagues, coaches or other professionals (for example, your physical therapist) to collaborate on more qualitative and contextual assessment. We're also adding event recognition, so users can simply upload a video and our system will use AI to understand what sport or movement is taking place.

In the GIF above you can see a very basic warm-up with our analytics focused specifically on the right arm. The graph below it tracks the arm, displaying its position in each frame. Each body part can be individually tracked for a given movement and compared against a benchmark (for example, the arc of a golf swing averaged across 100 golf professionals, or the proper placement of the shoulder mid-punch).
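To make that concrete, here is a minimal sketch (not our production code) of how a single keypoint can be tracked frame by frame and compared against a benchmark curve; the coordinates and benchmark values are invented for illustration.

```python
import numpy as np

# (x, y) pixel coordinates of one keypoint (e.g., the right wrist) per frame,
# as produced by a pose-estimation model. Shape: (num_frames, 2).
right_wrist = np.array([[320, 410], [322, 398], [330, 371], [345, 350], [360, 342]], dtype=float)

# Per-frame displacement of the tracked joint (pixels moved between frames).
displacement = np.linalg.norm(np.diff(right_wrist, axis=0), axis=1)

# A benchmark trajectory (e.g., averaged from many reference athletes),
# resampled to the same number of frames for comparison.
benchmark = np.array([[318, 412], [324, 396], [333, 373], [347, 352], [362, 340]], dtype=float)

# Simple deviation metric: mean per-frame distance from the benchmark path.
deviation = np.linalg.norm(right_wrist - benchmark, axis=1).mean()

print("per-frame displacement (px):", displacement.round(1))
print(f"mean deviation from benchmark: {deviation:.1f} px")
```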

We are now working on further beautifying the UI and preparing for our beta release by Q1'22.

Stromotion

Stromotion allows users to analyze the movement of a specific object as it moves across the frame. The University of Pretoria defined it best:

The use of Stromotion creates a video of trajectory that reveals the evolution of an athlete’s movement, the execution thereof, technique and style over space and time…it allows the means to analyse rapid movements where the moving object is viewed as a series of static images along the moving objects trajectory.

Traditionally, creating Stromotion is a manual and time-intensive process, taking upwards of an hour to analyze a single video; we can do the same in seconds without sacrificing quality while, in fact, enhancing the accuracy of each frame.
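As a rough sketch of the general idea (not our actual pipeline, which relies on object and pose tracking rather than simple background subtraction), the effect can be approximated by pasting the moving subject from sampled frames onto a single composite image; the file name below is hypothetical.

```python
import cv2
import numpy as np

def stromotion_composite(video_path: str, step: int = 10) -> np.ndarray:
    """Composite the moving subject from every `step`-th frame onto one image,
    leaving a freeze-frame trail of the motion."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    composite = None
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Foreground mask separates the moving subject from the static background.
        mask = subtractor.apply(frame)
        if composite is None:
            composite = frame.copy()  # first frame provides the background
        if frame_idx % step == 0:
            # Paste the subject's pixels from this sampled frame onto the composite.
            composite[mask > 0] = frame[mask > 0]
        frame_idx += 1
    cap.release()
    return composite

# Usage (hypothetical file name):
# cv2.imwrite("stromotion.png", stromotion_composite("pole_vault.mp4", step=15))
```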

Stromotion for Tennis (video on left, Stromotion on right)

Stromotion for Pole Vault (video on left, Stromotion on right)

Stromotion for Golf

The core value of Stromotion is to let users analyze individual body positions at specific moments within a motion. Our system further offers the ability to choose which person within the video should be analyzed (especially useful for two- or multi-person games), which is particularly valuable for games or events where the player is moving across the screen.

Once individual motions are selected, our technology can then build a biomechanics-based analysis of each movement. This could enable comparisons between, say, multiple 3-meter sprints or basketball throws from across the court.

Swing tracking

In addition, our system is containerized, which means it can now be started or terminated with a one-line script, increasing its usability and stability.

Going down the well-beaten Vitruvian Man path, we defined a pose keypoint schema of our own, which we've called “IL_Pose,” and integrated it with the pose models we've been developing.

You can see an example of this body positioning in our yoga analysis model, with a number at each body point.
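The exact IL_Pose keypoint list isn't spelled out in this post, so the indices and names below are purely illustrative, but a schema of this kind typically maps an index to each body point and lets downstream code compute joint angles from the detected coordinates.

```python
import numpy as np

# Illustrative keypoint schema (indices and names are assumptions, not the
# actual IL_Pose definition).
KEYPOINTS = {
    0: "nose", 1: "left_shoulder", 2: "right_shoulder",
    3: "left_elbow", 4: "right_elbow", 5: "left_wrist", 6: "right_wrist",
    7: "left_hip", 8: "right_hip", 9: "left_knee", 10: "right_knee",
    11: "left_ankle", 12: "right_ankle",
}

def joint_angle(a, b, c) -> float:
    """Angle at keypoint b (in degrees) formed by the points a-b-c."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Example: right elbow angle from shoulder, elbow and wrist coordinates (pixels).
pose = {2: (300, 200), 4: (340, 260), 6: (400, 250)}
print(f"right elbow angle: {joint_angle(pose[2], pose[4], pose[6]):.1f} degrees")
```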

We are applying these metrics across our platform, notably in the following sport domains:

Yoga

Yoga is a fascinating area for human pose analysis because the body positions taken are sometimes not very human-shaped and are therefore difficult for traditional pose models to handle; existing pose models are simply unable to detect many of the more advanced yoga poses because the underlying AI model was never “taught to see” those positions as human.

Over the last six months we have annotated hundreds of thousands of images of unique and unusual yoga poses from a variety of perspectives, lighting conditions and angles to make our AI models smarter, more versatile and more accurate.

The GIF below combines our biomechanical analysis of both the body and the feet (which are often missed by other AI pose models).

Boxing

As hardware meets software in the boxing ring, or in a dedicated space at home with the likes of Lightboxer (which has raised $25M to date) and FightCamp (which recently raised a $90M round), boxing has become smarter and more rigorous, empowering novice boxers to receive instant feedback at what was previously personal-trainer-like levels.

Our system is intended to expand on this innovation, providing not only counts of punches thrown, blocked or parried but also, critically, a full analysis of a boxer's biomechanics.

Owners of boxing studios could deploy this video solution and open it up to students who subscribe to the analytics (as another potential revenue channel for the studio), while coaches anywhere in the world could access feeds of students boxing anywhere else in the world. Qualifying every moment of a boxing session as “good” or “bad” requires both further analysis and collaboration with the best boxers in the world. If you're interested in helping with our boxing analysis, reach Jesse by email at jesse@ideaslab.com.

Soccer

(or Football, depending on where you’re reading this)

Our system allows our customers to understand the game from both the team and individual player perspective, zooming from a bird’s eye view to individual biomechanics of the player.

At the team level, understanding which players are moving where, and the success or failure of each of their plays, requires this multi-level assessment; lacking one or the other offers only a partial view. Our effort to “make dumb video smart” further demands the ability to take a wide panning view of the game, quantify movement play by play and player by player, and present that analysis in an intuitive, intelligent and engaging way. What we're doing for soccer we're also now applying to American football, rugby, lacrosse, basketball, baseball and other team-based sports. The implications for competitive analysis and sports betting are many.
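One common building block for quantifying player movement from a wide camera view, shown here purely as a sketch with assumed reference points rather than as our actual pipeline, is a homography that maps positions in the image onto pitch coordinates.

```python
import numpy as np
import cv2

# Four reference points visible in the camera image (pixels) and their known
# positions on the pitch (meters). These values are illustrative only.
image_pts = np.float32([[120, 80], [1180, 95], [1230, 700], [60, 690]])
pitch_pts = np.float32([[0, 0], [105, 0], [105, 68], [0, 68]])

# Homography mapping image coordinates to pitch coordinates.
H = cv2.getPerspectiveTransform(image_pts, pitch_pts)

# Map a detected player's foot position from the image onto the pitch.
player_img = np.float32([[[640, 400]]])          # shape (1, 1, 2) as cv2 expects
player_pitch = cv2.perspectiveTransform(player_img, H)[0, 0]
print(f"player at ~({player_pitch[0]:.1f} m, {player_pitch[1]:.1f} m) on the pitch")
```

Once detections are mapped into meters, distances covered and play shapes can be measured consistently regardless of camera angle.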

The analytics would include interactive skeleton charts for keypoint analysis, together with 2D to 3D intelligence.

On the front-end, we've set up the ability to register for an account, upload videos and process them. Our beta is set to be released in early 2022. If you're interested in checking out the beta once it launches, just email us directly at hello@ideaslab.com with the subject line: “Count me in!” (pun intended).

Place

We leased our first office in Taipei in the famous Xinyi Special District, in the Nan Shan building, the third-tallest building in Taiwan (completed nearly three years ago to the day). While our office is small and humble today, we will be looking to expand our space as we continue to grow our Taiwan base.

Partnerships

In April, Ideas Lab and the Ministry of Science and Technology (MOST) co-sponsored a project at the National Taiwan University of Sport (NTUS) to estimate baseball spin using high-speed cameras. This project will run until October 2021 (with a write-up to follow in Q1'22). The collaboration covers the following technical aspects (a rough sketch of the spin-rate arithmetic follows the list):

  • Angle / focal length for spin calculation
  • 3D baseball model building
  • Seam extraction
  • Matching seams to spin axis / rate
  • Building a 3D Convolutional Neural Network (CNN) model for calculating baseball spin axis / rate
  • System integration and evaluation
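As a toy illustration of the final analytical step (not the MOST/NTUS method itself, which matches seams against a 3D model and uses a 3D CNN), once a seam's orientation has been measured in consecutive high-speed frames, the spin rate follows directly from the per-frame rotation and the camera's frame rate; all numbers below are invented.

```python
import numpy as np

FPS = 1000.0  # assumed high-speed camera frame rate (frames per second)

# Seam orientation (degrees) measured in consecutive frames; illustrative values.
seam_angles = np.array([0.0, 9.1, 18.3, 27.2, 36.5])

# Average rotation per frame, converted to revolutions per minute (RPM).
deg_per_frame = np.mean(np.diff(seam_angles))
rpm = deg_per_frame * FPS / 360.0 * 60.0
print(f"estimated spin rate: {rpm:.0f} RPM")
```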

We are also in initial discussions with National Cheng Kung University (NCKU) to recreate 3D motion from 2D images and video. Our discussions are currently ongoing and we expect to implement this 2D to 3D research in 2022. Stay tuned!

Plans (Q3-Q4’21)

3D reconstruction of body pose and swing tracking

At times, 2D is not enough to understand the nuanced motion of a golf swing, or how a minor shift in the swing of a golf club or tennis racket can be the difference between winning and losing. As noted above, we are working on a full 3D depiction of motion that allows users to step inside the swing, view it from all sides, and understand how the physics of the club or racket hitting the ball is a function of hip placement or the subtle angularity of one's overall body position. More to come on this in future posts.

Full analytics

Translating motion to meaning requires an eye for beauty and a mind (or, rather, a team of minds) for mathematics. Analysis without interpretation is a half-measure, and our work from the beginning has been to make sense of one's motion by providing a more structured, rigorous and quantitative approach that can both massively scale and co-exist with experienced observers of a particular sport. We are working on a system that can quickly and intuitively turn a simple act of jumping or running across the screen into an analytics playground with metrics and data visualization.

Integrated system (selectable pose tracking, people tracking, Stromotion)

While each of our core technologies noted above has been developed separately, the plan was always to bring them together into a single, unified platform experience. The aim is to give significant and far-reaching powers of analysis to our customers who, while leveraging the same system, can approach, analyze and strategize in vastly different ways. Our objective is simply to make sense of motion in a far more accurate (and more enjoyable) way than any other system on the planet, empowering individuals to analyze their own motion and that of their students, teammates or competitors, anywhere and everywhere!

iOS prototype

We are building out the design of our prototype on the iPhone, which will enable users (both students and coaches) to upload a video of their game and have their motion analyzed. Specific metrics will be available based on the specific motion (e.g., running, golf, tennis), and results can be shared with or exported to friends and professionals alike. We'll be showing screenshots of the mobile concept in future posts and will announce when the demo is launched on TestFlight for your feedback.

As always, feel free to reach us at hello@ideaslab.com to learn more!

In the meantime, stay safe and keep moving!

ABOUT IDEAS LAB

Ideas Lab is an innovation lab and start-up studio building proprietary artificial intelligence, machine vision, and human motion analysis technologies. Today, while developing a suite of AI-based solutions, we are building a network of corporate and academic partners with whom we will improve human performance in the many ways people move, perform and play!

Visit us to learn more about Ideas Lab today!
