The Company
Spotify is the world’s leading audio streaming platform, with over 500 million monthly active users across more than 180 markets. Its core mission is to unlock the potential of human creativity by giving a million creative artists the opportunity to live off their art, and billions of fans the opportunity to enjoy and be inspired by it.
As a company, Spotify focuses heavily on personalized listening, discovery-driven experiences, and user retention through emotional connection to music. From algorithmic recommendations like Discover Weekly to user-curated playlists, discovery remains one of Spotify's biggest value propositions. Yet while Spotify offers robust tools for exploring new music, one common entry point for discovery, recognizing a song heard in the world around you, is currently underserved within the app.
The Problem
Many Spotify users hear music in everyday life, at cafés, gyms, stores, or on social media, and want to instantly identify the track and save it. Today, users must leave the Spotify app, rely on third-party tools like Shazam or SoundHound, then return to Spotify to find and save the track. This creates unnecessary friction and breaks the listening journey.
From a product perspective, this behavior represents a missed opportunity to retain users within the app, drive song engagement, and deepen the discovery loop. For users, the process feels disjointed, delayed, and inefficient, especially when the moment to act is brief.
Research
Research objective
To understand how Spotify users currently identify unfamiliar music they encounter in the real world, explore the pain points and behaviors around using third-party recognition apps, and uncover expectations for an integrated sound recognition feature within Spotify. This research aims to inform the design of a seamless, intuitive in-app experience that supports spontaneous music discovery and encourages song saving and engagement.
Method #1: User Interviews
To gain qualitative insights into how users currently identify unfamiliar songs, I conducted 1:1 interviews with five Spotify users ranging from casual listeners to power users. The goal was to understand their real-world music discovery habits, frustrations with current solutions, and expectations for an in-app recognition feature. These conversations helped uncover patterns around behavior, emotional moments (like missing a song), and preferred recovery paths.
Method #2: Competitive Analysis
I analyzed key competitors in the sound recognition space, including Shazam, SoundHound, and Musixmatch, to understand how these platforms approach real-time recognition, UI feedback, and saving functionality. This analysis highlighted usability standards, feature gaps, and opportunities for Spotify to differentiate with a more seamless, music-centered flow.
Research findings
Competitor analysis chart
User persona
This project focused on users who frequently hear music in the real world and want to identify and save songs effortlessly. Based on user interviews, I created a primary persona to represent these spontaneous discovery needs.
Too many steps to identify a song
Users often have to leave Spotify, open a third-party app (like Shazam), wait for detection, then return to Spotify to search for and save the song, creating unnecessary friction and a disjointed experience.
Missed moments of discovery
Participants frequently missed songs in real-life settings (e.g., cafes, shops, social media) because they didn’t open Shazam fast enough or were distracted. The window to capture a song is brief, and delayed action often means losing the opportunity entirely.
Limited search flexibility within Spotify
Spotify's current search functionality requires users to know the exact song title, artist, or album. Users expressed frustration that they couldn't search using lyrics, vague phrases, or moods, limiting their ability to find or rediscover music.
Tap to Identify button
Easily accessible from Spotify's Home or Now Playing screen. Tapping it identifies whatever is playing around you.
Auto-ID / background listening mode
Optional setting where Spotify listens passively (with permission) and automatically adds recognized songs to a “Discovered Nearby” playlist.
Instant save + add to playlist
Once a song is recognized, users can immediately save it to their library or add it to a custom playlist in one tap.
Recognition history log
Timeline view of all songs the user has identified, with quick access to play, save, or share.
Real-time lyrics display
If available, the app instantly shows synced lyrics for recognized tracks.
One-tap share
Quickly share the identified song via Spotify links, text, or social platforms.
“By the time I open Shazam, the song’s already over.”
- Matthew, Spotify user
How might we
Help users identify songs instantly and seamlessly within Spotify without needing to switch apps or lose the moment?
Simplify the post-discovery process in Spotify so users can instantly save or add identified songs to their library or a playlist?
low-fidelity WIREFRAMES
To begin translating research insights into tangible solutions, I created low-fidelity wireframes to map out the core user flow: activating sound recognition, receiving a result, and responding to both successful and failed identification attempts. These early screens focused on functionality, hierarchy, and user decision points without getting distracted by visual details.
The goal was to validate layout, interaction logic, and how clearly each step supported quick, intuitive music discovery.
mid-fidelity WIREFRAMES
After establishing the core structure in low fidelity, I moved into mid-fidelity wireframes to refine layout, spacing, and interaction details. This stage helped me focus on information hierarchy, button placement, and user flow continuity without the distraction of color, branding, or final typography.
The mid-fi wireframes were also used for early usability testing, allowing me to validate whether users could recognize system states, navigate error handling, and complete essential actions like saving a song. Based on feedback, I made several adjustments before moving to high fidelity, including emphasizing fallback options and improving listening state feedback.
high-fidelity DESIGNS
Building on feedback from low-fidelity testing, the high-fidelity designs bring the feature to life within Spotify's visual language. These mockups incorporate real UI patterns, branding, and motion cues to simulate a realistic in-app experience.
From subtle listening animations to a clean match result screen, every detail was designed to feel native to Spotify while reducing friction in the music discovery process. The interface guides users through recognition, saving, and recovery actions with clarity and confidence.
VIEW FINAL PROTOTYPE
What I tested
With these wireframes, I conducted usability testing focused on layout clarity, ease of navigation, and the song-identification flow. Feedback helped validate that the structure felt intuitive and trustworthy, and that users could complete their main task, identifying and saving a song, without confusion.
This phase helped bridge the gap between strategic structure and emotional design, and laid the foundation for a confident transition into high-fidelity visual design.
KEY TAKEAWAYS
Discoverability was strong: All users successfully located and initiated the feature from the Search tab with no assistance.
Listening state was mostly understood, though two participants suggested clearer visual feedback (e.g., stronger animation or timer).
Recovery needed refinement: “Search by Lyrics” was often missed or mistaken for non-clickable text, signaling the need for a more prominent UI treatment.
User expectations aligned with Spotify’s ecosystem: Participants expected a preview or playback immediately after identifying a song.
Results & impact
The testing validated that the feature’s core functionality works well and fits naturally within Spotify’s mobile experience.
Usability insights led to refinements in:
Visual feedback during the listening phase
Error recovery flow and button hierarchy
User trust, strengthened through progress indicators and clear labels
These changes improve the likelihood that users will stay in the app, complete the identification flow, and save discovered songs, which contributes directly to Spotify’s engagement and retention metrics.
Success metrics
100% task completion rate
All participants successfully completed the core flows: identifying a song, saving it, and recovering from an error.
High discoverability of the feature
All users were able to locate and activate the recognition button without guidance.
Mixed success in error recovery
3 out of 5 users noticed the “Search by Lyrics” option; others overlooked it, indicating a need for improved visibility.
Strong user satisfaction
Most participants described the experience as smooth, intuitive, and aligned with their expectations of Spotify’s interface.
Minor confusion around listening feedback
2 users weren’t sure if the app was actively listening, highlighting a need for stronger visual cues during detection.