What is a UI element?
A UI element is any distinct component that makes up a digital interface. Buttons, input fields, checkboxes, dropdown menus, navigation bars, icons, progress indicators, modals, tooltips, cards, and banners are all UI elements. Together, they form the interactive and visual surface through which users access and engage with a product's functionality.
The term is broad by design. It describes both the smallest atomic components (a single icon, a toggle switch) and larger composite components (a form containing multiple fields, a navigation bar containing tabs and icons). What they share is that each is a defined, identifiable piece of the interface with specific visual properties, behaviors, and states.
UI elements sit at the intersection of visual design and interaction design. Their appearance is a visual design concern: what they look like in each state, how they communicate their affordances, and how they fit within the product's visual language. Their behavior is an interaction design concern: how they respond to input, what states they can be in, and how they communicate change to users.
How are UI elements categorized?
There's no single universal taxonomy, but UI elements are typically grouped by function into several broad categories.
- Input elements allow users to enter data or make selections: text fields, password fields, search inputs, textareas, checkboxes, radio buttons, toggles, sliders, date pickers, and file upload fields all fall here. Their design governs how clearly they communicate what kind of input is expected and how gracefully they handle errors.
- Navigation elements help users move through the product: tab bars, navigation bars, breadcrumbs, pagination, side menus, and back buttons are all navigation components. Their placement and labeling determine how well users understand the structure of the product and how to move through it.
- Action elements trigger behaviors: buttons, links, and icon buttons are the primary examples. A button's label, visual weight, and placement communicate what will happen when it's activated and how important that action is relative to other options on the screen.
- Informational elements surface content and status without requiring direct interaction: labels, headings, body text, tooltips, badges, progress indicators, empty states, and loading spinners all belong here. They carry the language and status communication of the interface.
- Container elements organize and group other elements: cards, modals, drawers, dialogs, and panels are all containers. They create the visual structure that helps users understand how content relates and what belongs together.
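The five categories above can be sketched as a data model. This is a hypothetical illustration in TypeScript, not a real design-system API: the type and field names are invented, and real component libraries model far more detail.

```typescript
// Hypothetical sketch: the five functional categories as a discriminated
// union. Names and fields are illustrative, not from any real library.
type UIElement =
  | { kind: "input"; inputType: "text" | "checkbox" | "slider"; value: unknown }
  | { kind: "navigation"; target: string; label: string }
  | { kind: "action"; label: string; onActivate: () => void }
  | { kind: "informational"; content: string }
  | { kind: "container"; children: UIElement[] };

// A composite element: a card (container) grouping a heading and a button.
const card: UIElement = {
  kind: "container",
  children: [
    { kind: "informational", content: "Plan details" },
    { kind: "action", label: "Upgrade", onActivate: () => {} },
  ],
};

function countElements(el: UIElement): number {
  // Containers count themselves plus every element they group.
  return el.kind === "container"
    ? 1 + el.children.reduce((n, c) => n + countElements(c), 0)
    : 1;
}
```

The container variant is what lets small atomic elements compose into the larger composite components described earlier: a card is itself a UI element, and so is everything inside it.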
What are element states?
Every interactive UI element exists in multiple states, and designing for all of them is a core part of UI work.
- The default state is how an element appears when nothing has happened yet: a button at rest, an empty input field, an unchecked checkbox. This is often what gets the most design attention, but it's only one of several states a user will encounter.
- The hover state (on pointer devices) and the focus state (for keyboard navigation) indicate that an element is currently being pointed at or has keyboard focus. Both need to be visually distinct from the default to signal interactivity and confirm the user's attention.
- The active or pressed state appears at the moment of interaction: the brief visual change when a button is clicked or a touch is registered. It provides immediate feedback that the action is being processed.
- The disabled state indicates that an element exists but cannot be interacted with in the current context: a button that's greyed out because a required form field is empty, or a menu option that requires a permission the user doesn't have. Disabled elements should look visually distinct from enabled ones and, where possible, provide some indication of why they're disabled.
- Error and validation states are particularly important for input elements: the red border and error message on a field that failed validation, or the green checkmark on a field that's been successfully completed. These states communicate the result of an interaction and what the user needs to do next.
- Loading states appear when an element or the content it depends on is being fetched. A skeleton screen representing a card before its data loads, or a spinner within a button after it's clicked, prevents the interface from appearing broken while work happens in the background.
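The states above can be modeled as a small state machine. The sketch below is an assumption for illustration: the state names mirror this list, but the transition rules (what "press" and "submit" do) are invented, not a spec.

```typescript
// Illustrative sketch of interactive element states. The guard logic
// (disabled/loading elements ignore input) is an assumed convention.
type ElementState =
  | "default"
  | "hover"
  | "focus"
  | "active"
  | "disabled"
  | "error"
  | "loading";

interface ButtonModel {
  state: ElementState;
}

function press(button: ButtonModel): ButtonModel {
  // Disabled and loading buttons ignore input entirely.
  if (button.state === "disabled" || button.state === "loading") {
    return button;
  }
  // Any interactive state moves to "active" at the moment of press.
  return { ...button, state: "active" };
}

function submit(button: ButtonModel): ButtonModel {
  // After a press, show a loading state while background work happens.
  return button.state === "active" ? { ...button, state: "loading" } : button;
}
```

Making the states explicit like this is one way to ensure none of them is forgotten at design time, since each state in the union needs a corresponding visual treatment.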
How do UI elements relate to design systems?
Design systems are built around UI elements. The component library at the heart of a design system is essentially a catalog of UI elements: each element documented with its variants, states, usage guidelines, accessibility requirements, and the code that implements it.
The value of systematizing UI elements is consistency and efficiency. When a button component is defined once, every button in the product uses the same visual treatment, spacing, and interaction behavior. When the button needs to change, updating the component propagates the change everywhere it's used. Without this systematization, buttons accumulate visual drift across screens as individual designers make slightly different decisions, and updates require manually finding and correcting every instance.
Design tokens are the layer beneath components. They store the raw values that components reference: colors, spacing units, border radii, and shadows. A button's background color isn't specified directly in the component; it references a token like color-interactive-primary. When that token changes, every component that uses it updates automatically.
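A minimal sketch of that token layer, assuming a flat name-to-value map: the token names follow the color-interactive-primary example above, but the specific values and the buttonStyle helper are hypothetical.

```typescript
// Hypothetical token set. Real token systems add nesting, themes, and
// platform-specific transforms; this shows only the indirection.
const tokens: Record<string, string> = {
  "color-interactive-primary": "#2563eb",
  "spacing-md": "12px",
  "radius-button": "6px",
};

// Components reference tokens, never raw values.
function buttonStyle(t: Record<string, string>) {
  return {
    background: t["color-interactive-primary"],
    padding: t["spacing-md"],
    borderRadius: t["radius-button"],
  };
}

// Changing the token once propagates to every component that references it.
const rebranded = { ...tokens, "color-interactive-primary": "#16a34a" };
```

Because the component asks for a token name rather than a hex value, swapping the token set (rebranding, theming, dark mode) changes every button without touching the component itself.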
How has the design of UI elements been changing lately?
Several shifts have affected how UI elements are designed and built in recent years.
- Accessibility has moved from a checklist item to an embedded requirement. Design systems now increasingly build WCAG compliance into elements directly: touch targets meet minimum sizes, focus states are explicitly designed rather than browser-defaulted, color usage follows contrast requirements, and ARIA attributes are specified alongside visual specs. The result is that elements built on these systems inherit accessibility by default rather than requiring separate per-feature work.
- AI-assisted generation has changed how elements are explored in early design stages. Tools like Figma Make can generate component compositions from text prompts, giving designers a starting point that can be refined rather than requiring manual construction from scratch. The generated output typically conforms to common patterns and spacing conventions, but it requires evaluation against the product's specific design system and user needs.
- The line between design and code has narrowed. Design tokens shared between Figma and the codebase, and tools that generate production-ready component code from design files, have reduced the gap between what a designer specifies and what gets built. This makes the visual specification of UI elements more consequential, since it more directly maps to what users experience in the final product.