Kainos Studio

Designing an interface for non-technical users to customize machine-learning models

Project Objective
The creation of customized AI tools to automate tasks such as data analysis, document summarization, or bulk photo editing has largely been inaccessible to those without machine-learning experience. Kainos Studio addresses this by giving users a beginner-friendly, educational platform to engage with machine learning through a guided, interactive, and simplified interface. Kainos Studio distinguishes itself as an educational resource through interactive learning: users leave with an understanding of fine-tuning, the process of tailoring an AI model to a specific task.
Deliverable
A fully interactive, research-driven prototype that educates and empowers users to customize machine-learning models
Client
IBM Research
Team
2 Researchers
2 Designers
2 IBM Designers
3 IBM Engineers
My Role
Experience Design
Prototyping
User Research
Usability Testing  
Success Metrics
in progress :) 
Methods Used
Literature Analysis
Competitive Analysis
User Interviews
Shadowing
Qualitative Analysis
Prototyping
Usability Testing
Timeline
April 2023 - December 2023
Problem Statement
How can we make the process of creating AI tools, and the technical knowledge it requires, more accessible to those without the technical know-how?
Background
Customized AI tools can make life much easier
The creation of personal-use AI tools previously entailed building a machine-learning model from the ground up. But with the rise in popularity of foundation models (general-purpose, adaptable machine-learning models), it is now possible for anyone to create their own AI tool. These tools can make tasks such as sorting, scheduling, analyzing, and generating content much easier. Below are some use cases of custom-tailored AI tools.
However...
The creation of AI tools is STILL really hard
Though the ability to create AI tools is technically available to anyone, for free, a plethora of background knowledge is needed to successfully navigate the process. The following is a video I created to demonstrate the many decisions that must be made during fine-tuning, the process of tailoring an AI model to a specific task.
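To give a flavor of just how many decisions are involved, here is a minimal Python sketch of the kind of configuration a user implicitly assembles before fine-tuning can even begin. The field names and defaults are purely illustrative, not Kainos Studio's or WatsonX's actual parameters:

```python
from dataclasses import dataclass

# Hypothetical sketch only: every field below is a decision a
# non-technical user would otherwise have to make unaided.
@dataclass
class FineTuningConfig:
    base_model: str = "some-foundation-model"  # which model to adapt
    task: str = "summarization"                # what the finished tool should do
    learning_rate: float = 2e-5                # how aggressively the model adapts
    epochs: int = 3                            # passes over the training data
    batch_size: int = 8                        # examples processed at once
    train_split: float = 0.8                   # share of data used for training

    def validate(self) -> list[str]:
        """Collect human-readable problems instead of raising,
        so a beginner-facing UI could surface them all at once."""
        problems = []
        if not (0 < self.train_split < 1):
            problems.append("train_split must be between 0 and 1")
        if self.epochs < 1:
            problems.append("epochs must be at least 1")
        if self.learning_rate <= 0:
            problems.append("learning_rate must be positive")
        return problems

print(FineTuningConfig(train_split=1.5, epochs=0).validate())
```

Collecting problems into a list, rather than failing on the first one, mirrors the kind of gentle, explain-everything feedback a beginner-facing tool would want to give.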
It's also very hard to get into this space
It's difficult for non-technical users to enter this domain; none of the existing methods for utilizing these technologies are accessible to those with little machine-learning experience.
Solution
Say hello to Kainos
Introducing Kainos Studio, the educational and streamlined AI builder. Kainos is Greek for new, fresh, and something different. That's what we are bringing to the world of AI: a fresh, new way to do machine learning.
Kainos Studio distinguishes itself from other AI builders by prioritizing educating users on the various machine-learning processes involved. Users leave our platform with a greater understanding of how AI tools are created.
Kainos Studio was created to supplement IBM WatsonX, IBM's existing AI tool creator. Kainos Studio provides a more streamlined and accessible platform with less technical jargon, giving beginners an easier entry point before moving on to the more feature-rich WatsonX.
No more gatekeeping ML
The stages of fine-tuning :  demystified
Kainos streamlines fine-tuning by breaking it down into several linear steps, presenting users with a generalized, model-agnostic understanding of the process. Users always know exactly where they are in their time with Kainos and have a clear view of what's to come.
Product Video
Machine Learning, Democratized for all
Initial Designs
How we brought Kainos to life
We were fortunate to have access to the IBM Carbon Design System, so we were able to use portions of our final components directly in our initial design solutions. This was hugely beneficial during usability testing later on.
We emphasized security and privacy measures throughout our application, as well as confirmations on user-uploaded content, to better garner trust with our users.
We ideated on making the preprocessing step of our application completely automated, as this stage created the most friction according to our interviews and technical consultations.
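As a rough illustration of what "automated preprocessing" could mean, here is a minimal Python sketch. The cleanup rules are hypothetical stand-ins, not our actual pipeline; real preprocessing would be far more involved:

```python
import re

def preprocess(records: list[str]) -> list[str]:
    """Sketch of automated cleanup: normalize whitespace, drop empty
    rows, and remove case-insensitive duplicates, preserving order."""
    seen = set()
    cleaned = []
    for text in records:
        text = re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
        if text and text.lower() not in seen:      # skip blanks and repeats
            seen.add(text.lower())
            cleaned.append(text)
    return cleaned

raw = ["  Invoice #12  ", "invoice #12", "", "Receipt\n#7"]
print(preprocess(raw))  # ['Invoice #12', 'Receipt #7']
```

Even a pass this simple removes several manual chores that frustrated the experienced users we interviewed.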

Research
Our research at a glance
In short, our user research included literature reviews to familiarize ourselves with the existing machine-learning landscape, competitive analysis of existing AI-tool builders, and user interviews to understand common frustrations and sentiments around AI.
Competitive Analysis
Touring other AI Factories
We studied 20 different AI services, chatbots, and ML builders to better understand the existing fine-tuning landscape. Though many solutions exist, we noticed that they often fell into one of two categories.
The first category was elegant, streamlined solutions. Though these are accessible to non-technical users, they do little to explain the underlying processes, alienating end users.
The second was feature-rich, no-code tools. These solutions are convenient for developers with ML experience, but are not usable by those without technical knowledge.
Competitive Analysis Matrix
Other solutions included popular ML development environments such as Jupyter Notebook. Though all of the above services sufficed at providing an environment to create AI tools, they placed little emphasis on educating users about fine-tuning and machine-learning concepts, resulting in alienation from the overall process. This was greatly beneficial for us to discover, as it gave our solution an opportunity to stand out in the AI-creation landscape.
Interviews
Holistically analyzing 20 hours of material
We interviewed a total of 14 participants, consisting of IBM engineers, UCSC natural-language-processing students, and both ML-versed and non-ML-versed doctoral students.

Participant Overview

We had close to 20 hours of interviews to code, so we synthesized our findings by first qualitatively labeling interview sentiments (open coding), then grouping those labels into broader themes (axial coding). Employing these two methodologies allowed us to derive findings directly from our interviews without letting our own biases dictate the results.

Axial coding
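For the curious, the two-pass coding process can be sketched in a few lines of Python. The excerpts, labels, and themes below are illustrative stand-ins, not our real interview data:

```python
from collections import defaultdict

# Open coding: each interview excerpt gets a low-level sentiment label.
labeled_excerpts = [
    ("I gave up halfway through the tutorial", "frustration"),
    ("I don't know what the model does with my data", "distrust"),
    ("Cleaning the spreadsheet took all day", "data-prep pain"),
    ("The jargon made no sense to me", "frustration"),
]

# Axial coding: each low-level label maps to a broader theme.
label_to_theme = {
    "frustration": "Understanding ML is frustrating",
    "distrust": "People don't fully trust AI",
    "data-prep pain": "Preprocessing data is annoying",
}

themes = defaultdict(list)
for excerpt, label in labeled_excerpts:
    themes[label_to_theme[label]].append(excerpt)

for theme, excerpts in themes.items():
    print(f"{theme}: {len(excerpts)} excerpt(s)")
```

Counting excerpts per theme is what lets the findings emerge from the interviews themselves rather than from our assumptions.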

Major Interview Themes
TLDR: Machine learning is hard and confusing
Overall, participants were excited by the prospects of machine learning tools augmenting their workflows, but they were struggling with the following:
1) The process of understanding machine learning
2) Being able to trust AI suggestions
3) Preparing their data for AI work (for experienced users)
User Flow
Watching the pros do machine learning
Because none of us on our team had prior experience with fine-tuning or machine learning, we shadowed ML engineers at IBM and consulted many ML experts from other organizations to ideate on a simplified, agnostic fine-tuning process for our solution.
Streamlining the process
We developed an 8-step linear progression for our fine-tuning solution. Though fine-tuning is not strictly linear, we believed that presenting the steps this way would give users a sense of progression.
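The gating behavior we wanted, where users cannot advance until the current step is done, can be sketched like this. The step names are illustrative, not Kainos Studio's actual labels:

```python
# Illustrative stand-ins for the 8-step linear progression.
STEPS = [
    "Choose a task", "Pick a base model", "Upload data", "Preprocess data",
    "Configure tuning", "Run fine-tuning", "Review results", "Deploy tool",
]

class LinearFlow:
    """Users move strictly forward; the 'next' action stays inactive
    until the current step is marked complete."""
    def __init__(self, steps):
        self.steps = steps
        self.index = 0
        self.completed = False

    def complete_step(self):
        self.completed = True

    def can_advance(self) -> bool:
        return self.completed and self.index < len(self.steps) - 1

    def advance(self):
        if not self.can_advance():
            raise RuntimeError(f"Finish '{self.steps[self.index]}' first")
        self.index += 1
        self.completed = False

    def progress(self) -> str:
        return f"Step {self.index + 1} of {len(self.steps)}: {self.steps[self.index]}"

flow = LinearFlow(STEPS)
print(flow.progress())  # Step 1 of 8: Choose a task
flow.complete_step()
flow.advance()
print(flow.progress())  # Step 2 of 8: Pick a base model
```

The `progress()` string is exactly the "you are here" feedback we wanted every screen to carry.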
Lofi Diagrams
How we brought Kainos to life
To address the lack of explainability in existing AI products, we prioritized transparency and dedicated educational screens that explain sections of the fine-tuning process.

Several of our screens do little to actually advance users through the fine-tuning process, but by prioritizing education and explanation in our solution, we bridge the technical gap for our non-technical users.

To garner trust with our users, we display all tentative steps in our AI processes and consistently remind them of the actions they have already taken.

By including guidelines on security protocols, data privacy, and other expository information, we continue to build user trust throughout their time in our solution.

Usability Testing
Bringing Kainos into the fray
We ran a total of 7 usability tests of our software with participants including IBM engineers, technical researchers, non-technical researchers, and computational-media graduate students.

some user feedback we received

Major testing themes 

Interaction Variety

Our button and link styles were inconsistent, leaving users confused about which objects were interactive and which were static.

Transparency

Relevant information was hidden from users, who had to make multiple clicks to see important details, such as requirements for uploaded data.

Intuitive Actions

Some users were unaware of the actions needed to progress through our solution; several who got stuck thought our prototype contained errors.

Changes Made

We increased the size of the task selection tiles[1] to display more information, allowing users to select an appropriate task without additional clicks. We also made it clearer that users must select a task by adding an inactive next button[2].

To make the ideal flow of integration with existing IBM services clearer, we increased the size of that tile[3] and added an option to use sample data rather than user data. We also made the data requirements more transparent[4] by displaying them directly rather than requiring an additional interaction to view them.

Users were unaware that they needed to select "Yes" or "No" before continuing, so we removed unnecessary information[5] to bring more attention to the choice. We also added an inactive continue button[6] to further highlight this required action.

Our button and link styles were inconsistent, confusing users, so we updated them to be consistent across the platform[7]. We also made the signifiers of optional page actions more apparent[8], as users were often unaware of actions they could take.

Future work
Model Playground suite
Our platform lets users create and customize models, but not test or experiment with them. We would like to build a model "playground" where non-technical users can learn best practices for using their models in the future.
Reflection
It's ok to not have domain expertise
None of us had any experience with machine learning, so we doubted our ability to design a machine-learning platform. However, being cognizant of our own knowledge limitations and relying on the expertise of engineering specialists at IBM allowed us to contribute to an unfamiliar domain in our own ways. This showed us why, as designers, collaboration with engineers is so important.
Thank you for reading! Here are some of my other works