De-Mystifying the AI Developer Experience

Explore how UX research, strategy and design were orchestrated to prototype and scale a cloud-based platform that enables 150k+ AI developers to learn, create and optimize AI software applications on accelerated Edge hardware.

Problem Statement

In 2019, the race to establish best-in-class AI software development pipelines for computer vision, large language models and vertical-market solutions was heating up. At the time, Intel’s goal was to attract developers to use Intel hardware (including specialized AI acceleration chips) and software pipelines to create optimized AI solutions that could run locally on edge devices (e.g. autonomous vehicles, medical imaging, industrial automation). However, Intel’s AI developer experience was hampered by a steep learning curve and lacked the accessibility needed for wide adoption at scale.

The UX challenge: discover how best to engage developers with an approachable cloud-based platform for learning, prototyping, testing and optimizing AI applications for vertical-market uses. Scale and democratize AI development without the complexity of command-line tooling and time-consuming build-deploy-test workflows. Identify how to attract new AI developers and drive adoption of the Intel ecosystem.

UX Research Process

UX research was applied continuously from concept to prototype to production and guided product development priorities.

  • Foundational User Research

    Interviewed AI developers across industries and evaluated the workflows and tools they used to go from idea to a functional AI application.

  • Design Requirements & Prototyping: Defining the AI Developer Experience

    Engaged with a cross-functional, multinational team to establish best UX practices within an Agile development environment and design the Developer Cloud for the Edge with prototyping, design system integration and UX leadership.

  • Benchmarking UX: Developer Experience Assessments

    Conducted usability studies with prototypes and, subsequently, MVP software workflows to gather insights that shaped the product roadmap with capabilities such as enhanced Jupyter Notebooks with code snippets, sample AI applications and a Kubernetes container environment with performance dashboards. Post-launch, continuously assessed the state of the experience by analyzing clickstreams, AI workloads and rich telemetry data (sketched below). Deployed surveys and conducted task-based user studies to identify developer pain points and unmet needs and drive iterative enhancements into the product.
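
As a minimal sketch of the kind of post-launch telemetry analysis just described (the event names, column layout and funnel stages are hypothetical, not the platform's actual schema), clickstream events can be rolled up into task-funnel completion rates with pandas:

    import pandas as pd

    # Hypothetical clickstream export: one row per UI event per developer.
    events = pd.read_csv("clickstream_export.csv")  # columns: user_id, timestamp, event

    # Illustrative funnel stages for a "run a sample application" task.
    funnel = ["open_sample_catalog", "launch_notebook", "run_inference", "view_results"]

    # Which stages did each developer reach at least once?
    reached = (
        events[events["event"].isin(funnel)]
        .groupby("user_id")["event"]
        .agg(set)
    )

    # Share of developers reaching each stage, in funnel order.
    rates = {stage: reached.apply(lambda s: stage in s).mean() for stage in funnel}
    print(pd.Series(rates).round(2))

A funnel view of this kind shows where developers drop out of a workflow, which is the sort of signal task-based follow-up studies can then explain.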

Foundational Research

Observe and interview AI developers to understand workflows, tools and pain points

Research was designed to investigate the following questions through deep-dive contextual interviews, observational research and task-based walkthroughs with existing tools and workflows: 

  • What problems are customers trying to solve with AI on edge devices?
  • What local and cloud-based tools are used to create AI solutions focused on Edge inferencing?
  • What requirements are important to ensure an Edge AI application is working as intended/expected?
  • What steps do AI software developers go through to create and test applications in support of optimized AI inference with a trained model and how do they subsequently deploy code?
  • How do developers take shortcuts or save time with existing tools when modifying code and then testing on edge devices?
 

As an outcome of preliminary research, rich insights were uncovered that established developer journeys, UX roadmaps, design requirements and content development goals.

The following diagrams illustrate a high-level view of the AI developer journey from a user-needs perspective as well as a task-based perspective:

Developer Journey for Creating AI Applications on Edge Devices (UX perspective)

AI Developer User Journey (Simplified)

AI Developer End-To-End Workflow Mapping

User Research Summary

Initial qualitative research consisted of contextual interviews with dozens of participants combined with task-based observations. Developers walked through their existing AI coding workflows and then performed a series of developer tasks using tutorialized Jupyter Notebooks as stimuli (a web-based environment for interactive coding familiar to many developers).

This research established insights that laid the foundation for defining learning paths,  developer tool affordances and an overall UX design direction.

Feedback from this preliminary research also inspired feature innovations such as code snippets, data visualizations for AI model performance and a catalog of sample applications to experiment with and view real-time inference output, as sketched below.
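
To make the idea concrete, here is a minimal, hypothetical version of such a notebook snippet (the run_inference helper and input shapes are placeholders, not the platform's actual API): time repeated inference calls and plot the per-frame latency distribution.

    import time
    import numpy as np
    import matplotlib.pyplot as plt

    def run_inference(frame):
        """Hypothetical stand-in for a compiled model call on the target device."""
        time.sleep(0.01)                 # simulate ~10 ms of inference work
        return np.random.rand(1, 1000)   # fake classification scores

    latencies_ms = []
    for _ in range(200):
        frame = np.random.rand(224, 224, 3)   # placeholder input frame
        start = time.perf_counter()
        run_inference(frame)
        latencies_ms.append((time.perf_counter() - start) * 1000)

    plt.hist(latencies_ms, bins=30)
    plt.xlabel("Inference latency (ms)")
    plt.ylabel("Frames")
    plt.title("Per-frame latency on the selected target device")
    plt.show()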

Developer Goals Identified From Initial Research

Learn

  • Acquire skills to perform computer vision AI using Intel hardware
  • Acquire skills to set up industrial edge solutions using Intel hardware

Develop

  • Train, tune and deploy model (only tune and deploy fulfilled today by the Deep Learning Workbench)
  • Run sample applications in an interactive coding environment
  • Prototype in Python in an interactive coding environment using AI building blocks

Optimize

  • Assess application performance on hardware
  • Tune application based on latency and throughput trade-offs and expectations

Launch

  • Test E2E flow of AI application (input, inference, output, condition triggers, visualization/reporting)

UX Definition

Once initial research was analyzed and the developer journey understood, I authored wireframes and engaged with UX designers to create the E2E UX flows for content pages, the hardware-as-a-service application and the coding environment, aligned with the Discover, Build, Optimize and Launch user journeys. Developers were provided with tutorials, sample applications, code snippets and AI benchmarking and optimization tools. The primary components were:

  1. Website containing content and tutorials
  2. AI sample application catalog
  3. Kubernetes container environment
  4. Data visualization of AI model performance
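
As a rough sketch of how a containerized, hardware-as-a-service workflow of this shape could be driven programmatically (the node labels, image, namespace and job name below are invented for illustration and are not the platform's actual job API), the Kubernetes Python client can schedule a benchmarking job onto a particular class of edge node:

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in-cluster

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="edge-benchmark-demo"),
        spec=client.V1JobSpec(
            backoff_limit=0,
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    # Hypothetical node label pinning the job to one hardware class.
                    node_selector={"hardware-tier": "vision-accelerator"},
                    containers=[
                        client.V1Container(
                            name="benchmark",
                            image="registry.example.com/sample-app:latest",  # placeholder
                            command=["python", "benchmark.py"],
                        )
                    ],
                )
            ),
        ),
    )

    client.BatchV1Api().create_namespaced_job(namespace="devcloud-demo", body=job)

Pinning jobs to labeled nodes is one common way to let a developer choose which hardware a containerized workload runs on without exposing cluster details.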

The DevCloud for the Edge was designed to support new and experienced AI developers seeking to quickly experiment with developing software applications that could run optimized on edge devices. The platform supported a wide range of vertical-market and horizontal use cases for computer vision and large language models.

A catalog of tutorialized Jupyter notebooks provided interactive examples of AI code with real-time feedback.  Users could specify which compute platforms to run inference on and compare performance results directly from a browser-based environment.  
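
A minimal sketch of that compare-across-targets pattern, assuming the OpenVINO Python runtime (2022+ API); the model path, input shape and iteration count are placeholders rather than the platform's actual notebook code:

    import time
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")  # placeholder IR model path
    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder shape

    # Compare average latency across whatever devices this host exposes.
    for device in core.available_devices:            # e.g. ["CPU", "GPU", ...]
        compiled = core.compile_model(model, device_name=device)
        timings = []
        for _ in range(50):
            start = time.perf_counter()
            compiled([dummy_input])                   # synchronous inference call
            timings.append(time.perf_counter() - start)
        print(f"{device}: {1000 * sum(timings) / len(timings):.1f} ms average")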

Tactical Research - Benchmarking UX

Once the initial applications were developed based on my design guidance, user research studies were conducted in lab settings and remotely with global participants to measure usefulness, usability, supportiveness and consistency with developer workflows. Data captured included time on task, task completion, error rates, think-aloud feedback and System Usability Scale (SUS) ratings. The research was conducted in a rapid, iterative fashion to drive improvements to existing workflows and to prioritize new features with product development.
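
For context on the SUS measurement itself, the standard scoring arithmetic is fixed even though the ratings below are invented: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 to produce a 0-100 score.

    def sus_score(responses):
        """Standard System Usability Scale scoring for one participant.

        `responses` is a list of ten 1-5 Likert ratings in questionnaire order.
        """
        assert len(responses) == 10
        total = 0
        for i, r in enumerate(responses, start=1):
            total += (r - 1) if i % 2 == 1 else (5 - r)
        return total * 2.5

    # Invented example ratings for two participants (not study data).
    participants = [
        [4, 2, 4, 1, 5, 2, 4, 2, 5, 1],
        [3, 3, 4, 2, 4, 3, 3, 2, 4, 2],
    ]
    scores = [sus_score(p) for p in participants]
    print(scores, "mean:", sum(scores) / len(scores))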

User Experience Assessment Dashboard

Summarizing Developer Experience

A dashboard was presented quarterly to vice presidents and other product champions to uplevel qualitative insights and quantitative metrics into actionable success criteria. Data was aggregated from user research, heuristic evaluations, and enrollment, adoption and usage metrics, then summarized into a big-picture view of the state of the experience.

Iterative UX Audits

A containerized deployment workflow was designed and evaluated against usability heuristics. IX/UX design change requests were documented in JIRA for prioritization by the frontend development team. Once the code passed initial UX criteria, global user studies were conducted to refine the design and make adjustments prior to launch.