Simplifying AI Model Optimization
Zero-to-one UX research and design leadership to establish cloud-based tools for evaluating and optimizing AI model performance on target hardware.

UX Research & Design Process
As the UX lead for the Deep Learning Workbench, a cloud-based software solution for testing and optimizing AI models, I applied a mixed-methods research approach to guide the software team from definition to implementation. The UX research process is summarized as follows:
- Conduct foundational research consisting of contextual inquiry, stakeholder and subject-matter expert interviews, task analysis, and comparative benchmarking.
- Identify the existing state of the experience and define user journeys (existing versus optimal) based on evaluation of AI development tasks.
- Communicate generative research insights and elicit design requirements from the technical team. Facilitate design-thinking workshops with the cross-functional team.
- Establish and test design assumptions and constraints via initial wireframing and stakeholder interviews.
- Ground the design in an understanding of underlying needs, informed by expert walkthroughs, personas, feature prioritization, business objectives, and technical constraints.
- Create a conceptual prototype to establish initial design stimuli for user testing.
- Recruit AI developers and moderate evaluation sessions with design mockups to gather user feedback.
- Ratify design requirements as the plan of record and negotiate trade-offs with the engineering team.
- Develop a high-fidelity prototype with realistic design guidelines and ‘clickable’ end-to-end simulated functionality for usability testing.
- Usability-test the design prototype and establish validated requirements for MVP product development.
- Usability-test the functional beta build to pinpoint areas of UX deficiency caused by technical workarounds or developer deviations from the design specification.
- Perform detailed heuristic evaluations and communicate strategic UX design objectives to prioritize future feature innovation and iteratively improve UX quality, informed by UX assessments, summative usability results, and longitudinal beta testing.
- Manage UX defects identified in JIRA and advocate for prioritization and iterative improvements.
- Define the UX roadmap.
Initial Discovery Research
Before the Deep Learning Workbench was created, AI developers working with Intel hardware had to use a command-line interface and rely on frequent trial-and-error testing to optimize AI models to meet the latency and throughput requirements of a particular use case. To understand how AI developers performance-test and optimize AI models, we interviewed dozens of AI developers working at companies in the healthcare, transportation, manufacturing, security, and enterprise/consumer industries and observed them performing their tasks. This initial research surfaced the following challenges with existing workflows (a sketch of the manual measurement loop follows the list below):
- Measuring and tuning the performance of AI models has a steep learning curve and is perceived as a black art.
- Reaching optimal performance while balancing trade-offs is experimental and time-consuming.
- Interpreting the performance of AI models on target hardware with existing command-line tools is cumbersome.
- Comparing AI model performance across different hardware configurations is highly desired but difficult to accomplish and requires custom testing methods.
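To make the pain point concrete, the manual workflow looked roughly like the sketch below: a hand-written timing loop, rerun after every configuration change, that estimates latency and throughput at a few batch sizes. This is an illustrative reconstruction under stated assumptions, not the product's actual tooling; run_inference is a hypothetical stand-in for whatever model call a developer's framework provides.

```python
import time
import statistics

def run_inference(batch):
    """Hypothetical stand-in for a framework-specific model call."""
    time.sleep(0.005 * len(batch))  # simulate inference work

def benchmark(batch_size, iterations=100):
    """Time repeated inference calls and report latency/throughput."""
    batch = [None] * batch_size  # placeholder inputs
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        run_inference(batch)
        latencies.append(time.perf_counter() - start)
    median_ms = statistics.median(latencies) * 1000
    throughput = batch_size / statistics.mean(latencies)  # samples/sec
    return median_ms, throughput

# The trial-and-error loop: sweep a setting, eyeball the numbers,
# adjust, and rerun until latency and throughput targets are met.
for bs in (1, 4, 8, 16):
    latency, fps = benchmark(bs)
    print(f"batch={bs:<3} median latency={latency:6.1f} ms  "
          f"throughput={fps:7.1f} samples/s")
```

Every comparison across hardware targets meant repeating this loop by hand and collating the results, which is the gap the Deep Learning Workbench was designed to close.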
UX Definition
Feedback from initial discovery research, coupled with internal stakeholder discussions, provided the context needed to develop wireframes and preliminary concepts. We took those concepts back into the field to conduct task analysis and contextual workflow research, and applied the learnings to define an end-to-end clickable prototype. User research (lab-based and remote) was conducted with the following design stimuli:

Concept Testing
Early wireframing allowed us to test design assumptions and help the team align on product requirements. Preliminary user testing with small groups of internal and external participants provided design insights early in the process.

Prototyping
A clickable prototype was developed using Axure. Iterative changes were subsequently made to the design based on usability assessment criteria and task-based feedback.

Functional Application
Once the application was developed based on the agreed-upon UX requirements, summative user testing was applied to validate UX quality in light of the technical trade-offs made and to identify UX defects prior to release.
Evaluative Research & UX Assessment
MVP Developer Experience Assessment
Usability studies were conducted with iterative prototypes, and results were summarized via a UX MVP score calculated from the following metrics (a sketch of the underlying scoring arithmetic follows this list):
1. Task completion rate and time on task
2. Subjective assessment data from users performing each task
3. Developer ratings from the System Usability Scale (SUS), administered after all tasks were completed
4. Expert assessment ratings from the SUS
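For reference, a standard SUS questionnaire is scored as follows: each of the ten 5-point items contributes (response - 1) if odd-numbered (positively worded) or (5 - response) if even-numbered (negatively worded), and the sum is multiplied by 2.5 to yield a 0-100 score. The sketch below shows that arithmetic alongside the task-level metrics above; the data and names are hypothetical rather than drawn from the actual study, and the weighting used to combine these inputs into the final UX MVP score is omitted here.

```python
from statistics import mean

def sus_score(responses):
    """Score one completed SUS questionnaire (ten 1-5 Likert responses).

    Odd-numbered items are positively worded: contribution = response - 1.
    Even-numbered items are negatively worded: contribution = 5 - response.
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 range.
    """
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Hypothetical session data: one participant's SUS answers plus
# per-task results as (completed, time_on_task_seconds) pairs.
responses = [4, 2, 5, 1, 4, 2, 4, 2, 5, 1]
tasks = [(True, 95), (True, 140), (False, 300), (True, 80)]

completion_rate = mean(1.0 if done else 0.0 for done, _ in tasks)
avg_time = mean(t for _, t in tasks)

print(f"SUS score:        {sus_score(responses):.1f} / 100")  # 85.0
print(f"Completion rate:  {completion_rate:.0%}")             # 75%
print(f"Avg time on task: {avg_time:.0f} s")                  # 154 s
```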

UX Assessment
Expert reviews based on best practices and heuristics were delivered to product teams once a fully functional prototype was developed.


Detailed User Research Reports
Usability testing was conducted iteratively after major development milestones were completed. Reports containing detailed insights from usability testing were frequently presented to a cross-functional product team.


