
MusicGen - Detailed Review

MusicGen - Product Overview
Introduction to MusicGen
MusicGen is an advanced AI model developed by Meta AI, specifically designed for music generation. Here’s a breakdown of its primary function, target audience, and key features.
Primary Function
MusicGen is a text-to-music and melody-guided music generation model. It can create high-quality music samples based on text descriptions or audio prompts, such as melodies. The model operates as a single-stage auto-regressive Transformer, eliminating the need for multiple models or self-supervised semantic representations.
Target Audience
The primary users of MusicGen are researchers in the fields of audio, machine learning, and artificial intelligence. It is also useful for machine learning enthusiasts and amateurs who want to experiment with generating music from text or melody inputs. The tool helps these users probe the limitations of generative models and understand the current capabilities of AI in music generation.
Key Features
- Conditional Music Generation: MusicGen can generate music based on text descriptions or melodies, allowing for a high degree of control over the output.
- Single Model Architecture: Unlike some existing methods, MusicGen uses a single Language Model (LM) to generate music, simplifying the process and improving efficiency.
- High-Quality Output: The model is capable of producing high-quality music samples across various genres, including classical, jazz, pop, rock, and electronic music.
- Performance Metrics: MusicGen’s performance is evaluated using metrics such as Fréchet Audio Distance, Kullback-Leibler Divergence, and CLAP Score, along with qualitative studies involving human participants to assess overall quality, text relevance, and melody adherence.
- User Interface: The model provides an intuitive interface that allows musicians to interact with the generated music, provide feedback, and make adjustments to align with their artistic vision.
Usage
MusicGen is available for research and amateur use, with multiple checkpoints and model sizes (e.g., MusicGen-Large and MusicGen-Melody). Users can try the model through several platforms, including the Hugging Face demo and the AudioCraft Colab notebooks, or install the necessary libraries and tools to run it locally. Overall, MusicGen is a powerful tool that leverages AI to generate original, high-quality music, making music creation accessible to a broader audience of researchers, enthusiasts, and musicians.
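For local use, generation through the Hugging Face transformers library looks roughly like the sketch below. The model name and generation parameters follow the publicly documented API, and the 50-tokens-per-second rule of thumb comes from MusicGen’s reported frame rate, but treat the specifics as illustrative; the download-heavy part is gated behind an environment variable so the small helper can be tried on its own.

```python
import os

# MusicGen's EnCodec tokenizer runs at roughly 50 frames per second, so the
# number of decoder steps needed scales linearly with the clip length.
FRAME_RATE_HZ = 50  # frames per second, per the MusicGen paper

def duration_to_max_new_tokens(seconds: float) -> int:
    """Convert a target clip length into a max_new_tokens budget."""
    return int(seconds * FRAME_RATE_HZ)

# The heavyweight part is guarded behind an env var because it downloads the
# model weights (hundreds of MB even for musicgen-small) on first run.
if os.environ.get("RUN_MUSICGEN"):
    from transformers import AutoProcessor, MusicgenForConditionalGeneration
    import scipy.io.wavfile

    processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
    model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

    inputs = processor(
        text=["90s rock song with loud guitars and heavy drums"],
        padding=True,
        return_tensors="pt",
    )
    audio = model.generate(
        **inputs,
        do_sample=True,        # sampling mode, recommended over greedy decoding
        guidance_scale=3.0,    # classifier-free guidance strength
        max_new_tokens=duration_to_max_new_tokens(12.0),
    )
    rate = model.config.audio_encoder.sampling_rate  # 32 kHz for MusicGen
    scipy.io.wavfile.write("out.wav", rate=rate, data=audio[0, 0].numpy())
```

Setting `do_sample=True` selects the sampling mode the documentation recommends, and `guidance_scale=3.0` mirrors the commonly cited default; both can be tuned freely.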
MusicGen - User Interface and Experience
User Interface
The WebUI of MusicGen provides a straightforward and intuitive interface. Here are some key aspects:
Test Run
Users can start with pre-set examples to verify that the tool is working correctly. This feature automatically populates the necessary fields, and the model generates a song in about 2 minutes, which can then be downloaded or played directly from the WebUI.
Input Prompts
The interface lets users enter descriptive prompts to guide music generation. You can specify emotions, genres, beats per minute, and other musical elements in your prompts. This flexibility enables users to create music that matches their specific needs and preferences.
Melody Guide
Users can upload an audio file to serve as a guide for song generation. This melody-conditioning feature, part of Meta’s AudioCraft toolkit, allows the AI to interpret a melody and transform it into different styles or genres.
Ease of Use
MusicGen is relatively easy to use, even for those without extensive technical or musical backgrounds. Here are some points highlighting its ease of use:
Clear Instructions
The WebUI comes with clear instructions and examples, making it simple for users to get started. The documentation is comprehensive and includes FAQs, troubleshooting guides, and best practices.
Local Setup
While running MusicGen locally requires some technical setup (including installing Python and NVIDIA’s CUDA Toolkit), the process is well documented, and the GitHub repository includes detailed guides to walk users through installation and usage.
User-Friendly Web Interface
The web-based interface on Hugging Face is user-friendly and does not require advanced technical knowledge. Users can generate music quickly without setting up the tool locally.
Overall User Experience
The overall user experience of MusicGen is positive and engaging:
Customizable Parameters
Users can modify generation parameters such as guidance scale and maximum length, giving them control over the music generation process.
Community Support
MusicGen benefits from a community-driven approach, with the GitHub repository serving as a hub for developers and enthusiasts to collaborate, improve, and share knowledge about the tool.
Feedback and Improvement
Although MusicGen itself does not have a built-in feedback feature like Google’s MusicLM, the community and documentation help users refine their outputs and troubleshoot any issues. In summary, MusicGen’s user interface is designed to be accessible and easy to use, with a focus on providing a seamless experience for generating high-quality music from various inputs.
MusicGen - Key Features and Functionality
MusicGen Overview
MusicGen, developed by Meta, is a sophisticated AI tool for music generation that offers a range of powerful features and functionalities. Here are the main features and how they work:
Text-Conditional Generation
MusicGen allows users to generate music based on text descriptions. You can input prompts that specify genre, tempo, emotions, and other musical elements. This feature leverages a single Language Model (LM) to translate text into high-quality music, giving users significant control over the output.
Melody Conditioning
This feature enables the generation of music based on melodic structures from other audio tracks or user-created melodies. Users can upload an audio file, and MusicGen will use it as a guide to create new music, allowing for creative transformations of melodies into different styles or genres.
Audio-Prompted Generation
MusicGen can generate music using existing audio clips as a basis: users input an audio file, and the AI interprets and transforms it into new music, fostering creativity and innovation.
Advanced Model Architecture
MusicGen incorporates a text encoder, a language model-based decoder, and an audio encoder/decoder. This architecture enables the model to generate music in a single stage, eliminating the need for multiple models and making the process more efficient. It generates all necessary audio components in one pass, predicting them in parallel with a small delay between codebooks.
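The codebook-delay idea can be illustrated with a small, self-contained sketch (not Meta’s implementation): each of the K parallel token streams is shifted by its codebook index, so a single decoding step predicts one token for every codebook at once.

```python
# Sketch of the "delay" codebook pattern: with K codebooks, codebook k is
# shifted k steps to the right, so at decoding step t the model predicts
# token t of codebook 0, token t-1 of codebook 1, and so on, all in the
# same forward pass. PAD marks positions that are not yet defined.
PAD = None

def apply_delay_pattern(codebooks):
    """codebooks: list of K equal-length token lists -> delayed 2-D grid."""
    k = len(codebooks)
    t = len(codebooks[0])
    steps = t + k - 1  # total decoding steps after the shifts
    out = [[PAD] * steps for _ in range(k)]
    for row, tokens in enumerate(codebooks):
        for col, tok in enumerate(tokens):
            out[row][col + row] = tok
    return out

def undo_delay_pattern(delayed):
    """Invert the shifts to recover the original aligned codebooks."""
    k = len(delayed)
    steps = len(delayed[0])
    t = steps - (k - 1)
    return [delayed[row][row:row + t] for row in range(k)]
```

Decoding proceeds left to right over the delayed grid, and `undo_delay_pattern` realigns the streams before the audio decoder turns tokens back into a waveform.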
Flexible Generation Modes
The tool offers both greedy and sampling generation modes. The sampling mode is recommended for better results, as it introduces randomness to explore a wider range of musical possibilities.
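The difference between the two modes can be sketched in a few lines of plain Python (toy logits, not the real model’s vocabulary):

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def greedy_pick(logits):
    """Greedy mode: always take the single most likely next token."""
    return max(range(len(logits)), key=lambda i: logits[i])

def sample_pick(logits, temperature=1.0, rng=random):
    """Sampling mode: draw from the softmax distribution. A higher
    temperature flattens it, exploring a wider range of possibilities."""
    probs = softmax([x / temperature for x in logits])
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

Greedy decoding always returns the same token for the same logits, which is why sampling is recommended for more varied musical output.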
Unconditional Generation
MusicGen can generate music without specific prompts or inputs. This feature allows the AI to create music purely based on its training data, which includes 20,000 hours of diverse licensed music.
Customizable Generation Process
Users can modify various generation parameters such as the guidance scale and maximum length of the generated music. This customization allows for finer control over the output, making it more suitable for specific needs.
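The guidance scale mentioned above is typically applied as classifier-free guidance; a minimal sketch, assuming the standard formulation and toy logit values:

```python
# Classifier-free guidance: at each step the model produces logits for the
# text-conditioned and the unconditional case, and the final logits
# extrapolate away from the unconditional ones. scale = 1.0 disables
# guidance; larger values (3.0 is the commonly cited MusicGen default)
# follow the text more closely at the cost of diversity.
def apply_guidance(cond_logits, uncond_logits, scale):
    return [u + scale * (c - u) for c, u in zip(cond_logits, uncond_logits)]
```

With `scale=1.0` the conditional logits pass through unchanged, which makes the role of the parameter easy to verify.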
Extensive Training Dataset
MusicGen has been trained on an extensive dataset of 20,000 hours of licensed music, including high-quality tracks and instrumentals. This diverse training dataset enables the model to generate a wide range of music styles and genres.
User Interface and Accessibility
MusicGen is accessible through a user-friendly WebUI hosted on the Hugging Face platform. Users can select pre-set examples or input their own prompts to generate music. The model can also be run locally with the necessary dependencies installed.
Community and Support
MusicGen benefits from a community-driven approach, with the model and code available on GitHub. This openness allows for community support, updates, and contributions, ensuring the model remains versatile and up-to-date.
Conclusion
These features collectively make MusicGen a powerful tool for music composition, suitable for various applications including education, content creation, and professional music production. The integration of AI ensures high-quality music generation with significant user control and flexibility.

MusicGen - Performance and Accuracy
The MusicGen Model
The MusicGen model, developed by Meta, is a significant player in the AI-driven music generation category, but its performance characteristics and limitations are important to consider.
Performance Metrics
MusicGen has been evaluated on various objective metrics to assess its performance:
- Fréchet Audio Distance (FAD): This metric measures how close generated audio is to real audio; lower is better. For the MusicGen Small model, this distance is 4.88.
- Kullback-Leibler Divergence (KLD): This measures the difference between the label distributions of generated and real audio. The KLD for MusicGen Small is 1.42.
- Text Consistency: This evaluates how well the generated music aligns with the provided text description. MusicGen Small achieves a text consistency score of 0.27.
- Chroma Cosine Similarity: Although not provided for the Small model, this metric is used for other variants like MusicGen Melody, indicating how well the generated music matches the chroma features of the input melody.
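Two of these metrics are easy to compute from first principles. The toy sketch below (not the official evaluation code) shows the KL-divergence form used to compare label distributions and the cosine-similarity form underlying CLAP-style text-consistency and chroma-similarity scores:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) over two discrete label distributions, as used to compare
    label distributions of generated versus reference audio."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cosine_similarity(a, b):
    """Cosine similarity between two embedding (or chroma) vectors; CLAP-style
    text-consistency scores compare text and audio embeddings this way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)
```

Identical distributions give a KL divergence of zero, and identical embedding directions give a cosine similarity of one, which is why lower KLD and higher text-consistency scores indicate better alignment.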
Limitations
Despite its capabilities, MusicGen has several limitations:
- Vocal Generation: The model is unable to generate realistic vocals. Vocals have been removed from the training data using music source separation methods.
- Language Dependency: MusicGen has been trained with English descriptions and may not perform as well with descriptions in other languages.
- Music Style and Culture: The model does not perform equally well across all music styles and cultures. There is a potential bias in the training dataset, which may not represent all music genres and cultures equally.
- Song Structure: Sometimes, the model generates songs that end abruptly, collapsing to silence. This indicates a struggle with maintaining long-term structure and musical coherence.
- Prompt Engineering: It can be challenging to determine the best text descriptions for generating satisfying music samples, and prompt engineering may be necessary.
Areas for Improvement
Several areas need attention for improving MusicGen:
- Long-Term Structure: The model struggles with maintaining long-term structure and musical coherence, which is a common issue in generative music systems.
- Audio Fidelity: While recent models have improved audio fidelity, there is still room for enhancement to reach professional audio production quality.
- Semantic Mapping: There is a challenge in finding a good mapping between words and music due to the subjective nature of music perception.
- Creative Control: Users have limited creative control over the generated music, as they can only provide an initial text description and may need to start over if the result is not satisfactory.
- Data Diversity: The training dataset lacks diversity in music cultures and genres, which affects the model’s performance across different styles.
Strategies for Overcoming Limitations
To address these limitations, several strategies can be employed:
- Hybrid Systems: Combining different models or techniques can help overcome specific limitations, such as integrating models that handle vocals or different music styles better.
- Open Source: Pushing for open-source models and datasets can help in improving diversity and representation in the training data.
- Focus on Small Models: Smaller models like MusicGen Small can be more efficient and easier to fine-tune, potentially offering better performance in specific areas.
- Bridging the Gap between Engineers and Creatives: Better collaboration between engineers and creatives can lead to more user-friendly interfaces and more effective control over the music generation process.
By acknowledging these limitations and working on these areas, MusicGen can be further improved to provide more accurate, engaging, and diverse music generation capabilities.

MusicGen - Pricing and Plans
The Pricing Structure of MusicGen
The pricing structure of MusicGen, developed by Meta, is relatively straightforward and favorable for users, especially given its free access.
Free Version
MusicGen is entirely free to use. There are no subscription fees, hidden charges, or login requirements. This makes it highly accessible for anyone interested in generating music using AI.
Features in the Free Version
The free version of MusicGen offers a wide range of features, including:
- Text-Conditional Generation: Generate music based on text descriptions specifying genre, tempo, and other parameters.
- Audio-Prompted Generation: Use existing audio clips as a basis for new music creation.
- Melody Conditioning: Generate music based on melodic structures from other audio tracks or user-created melodies.
- Unconditional Generation: Generate music without specific prompts or inputs.
- Customizable Generation Process: Modify parameters like guidance scale and maximum length.
- Multiple Music Styles: Generate music in various styles, such as pop, rock, classical, and more.
- Download Options: Download generated music in various formats.
Commercial Use
MusicGen’s code and models are released openly, but under different licenses: the code is MIT-licensed, while the pre-trained model weights are released under the CC-BY-NC 4.0 (non-commercial) license. Users planning professional projects should review these terms before relying on the generated music commercially.
No Paid Tiers
There are no paid tiers or premium plans mentioned for MusicGen. The tool is fully functional and free for all users, making it an excellent resource for both hobbyists and professionals.
Conclusion
In summary, MusicGen offers a comprehensive set of features without any cost, making it a highly accessible and valuable tool for music generation.

MusicGen - Integration and Compatibility
Integrating MusicGen
Integrating MusicGen, an AI music generation tool developed by Meta, involves several steps and considerations to ensure compatibility across different platforms and devices.
Platform Compatibility
MusicGen is highly compatible with various platforms, particularly those that support GPU processing, which is essential for running the model efficiently.
- Linux and GPU Support: Running MusicGen locally is most practical on a Linux OS with at least one GPU. AMD GPUs, for example, can be used with the ROCm platform, as demonstrated in the ROCm blogs.
- Hugging Face Platform: MusicGen is also available on the Hugging Face platform, which provides a hub for state-of-the-art machine learning models. This allows users to access and run MusicGen through the Hugging Face interface.
Software Requirements
To integrate MusicGen, you need to ensure your system meets the necessary software requirements:
- Python and Dependencies: You must have Python installed along with the required libraries, such as PyTorch and the transformers library from Hugging Face. The `requirements.txt` file in the MusicGen repository lists all the necessary dependencies.
- CUDA Toolkit: For a local setup, installing NVIDIA’s CUDA Toolkit (or using ROCm for AMD GPUs) is crucial for leveraging GPU capabilities.
Deployment and Customization
MusicGen can be deployed in various ways to suit different needs:
- Custom Inference Endpoints: You can deploy MusicGen using custom inference endpoints by duplicating the MusicGen repository, adding a custom handler and dependencies, and creating an inference endpoint. This method ensures ease of access and deployment.
- Local Setup: For a local setup, you need to clone the MusicGen code from GitHub, install the required packages, and run the MusicGen application. This involves setting up the environment, installing FFmpeg, and launching the MusicGen app.
Integration with Other Tools
MusicGen integrates well with other tools and libraries:
- Hugging Face Transformers: MusicGen is part of the Hugging Face Transformers library, which allows seamless integration with other models and tools available on the platform.
- AudioCraft: MusicGen is distributed as part of Meta’s AudioCraft project, which also includes the AudioGen and EnCodec models and supplies the tooling for using audio files as guides for song generation. This integration enhances the versatility of MusicGen.
User Interface and Accessibility
MusicGen offers a user-friendly interface, especially through the WebUI provided on the Hugging Face platform:
- WebUI: The WebUI allows users to generate music using text prompts, melodies, or audio clips. It provides a straightforward way to test and generate music without extensive technical knowledge.
- Desktop Shortcuts: For local setups, users can create desktop shortcuts to launch MusicGen directly, making it more accessible.
In summary, MusicGen is highly compatible with various platforms and devices, particularly those with GPU support. It integrates well with other tools and libraries, such as Hugging Face Transformers and AudioCraft, and offers a user-friendly interface through its WebUI and local setup options.

MusicGen - Customer Support and Resources
Community and Support
MusicGen benefits from the vibrant community and support ecosystem of Hugging Face. This includes:
Forums and Discussions
Users can engage with a community of developers and musicians through forums and discussion boards on the Hugging Face website. This community support is invaluable for sharing experiences, asking questions, and getting insights from others who are using the models.
Tutorials and Documentation
Hugging Face provides comprehensive tutorials and documentation to help users get started with MusicGen. These resources include step-by-step guides on how to load the model, generate music, and fine-tune the model for specific needs.
Technical Support
Model Documentation
Detailed documentation on the MusicGen model is available, explaining its architecture, how it works, and how to use it effectively. This includes information on the model’s checkpoints (Small, Medium, and Large) and the use of the EnCodec audio tokenizer model.
Code Snippets and Examples
Users can find code snippets and examples that demonstrate how to use MusicGen for text-to-music generation. These examples help in understanding the practical application of the model.
Fine-Tuning Resources
Fine-Tuning Guides
There are specific guides available on how to fine-tune MusicGen for text-conditioned music generation. These guides provide a comprehensive approach to adjusting the model to meet specific musical styles or genres.
Access to Pre-Trained Models
Model Repository
Hugging Face hosts a repository of pre-trained MusicGen models that users can access and use directly. This repository is continuously updated with new models and improvements contributed by the community.
While MusicGen itself does not have a dedicated customer support hotline, the support and resources provided through the Hugging Face platform ensure that users have ample assistance in using and optimizing the model for their needs.

MusicGen - Pros and Cons
Advantages of MusicGen
Efficient Audio Processing
MusicGen uses EnCodec, a modern neural audio codec that converts long, continuous audio signals into short sequences of discrete tokens. This approach significantly reduces the computational power needed to model audio while retaining essential musical features.
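The efficiency claim can be made concrete with back-of-envelope numbers from MusicGen’s published setup (32 kHz audio, a 50 Hz frame rate, and 4 codebooks); the figures below are those reported values, not measurements:

```python
# Back-of-envelope view of why discrete tokens are cheaper to model than raw
# audio, using the rates reported for MusicGen's EnCodec setup: 32 kHz mono
# audio is compressed to 4 parallel codebooks at 50 frames per second.
SAMPLE_RATE = 32_000   # raw audio samples per second
FRAME_RATE = 50        # EnCodec frames per second
NUM_CODEBOOKS = 4      # parallel token streams per frame

def tokens_per_second():
    return FRAME_RATE * NUM_CODEBOOKS

def compression_ratio():
    """Raw samples per second divided by discrete tokens per second."""
    return SAMPLE_RATE / tokens_per_second()
```

The language model therefore predicts 200 tokens per second of audio instead of 32,000 raw samples, a 160x reduction in sequence length.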
Text Conditioning
MusicGen allows for music generation conditioned by text descriptions. This feature enables users to create music that reflects the characteristics specified by the text input, making it versatile for various creative needs.
Customization and Flexibility
Users can generate purely instrumental tracks and customize the style to match specific genres, moods, or scenes. This flexibility makes MusicGen useful for musicians, content creators, game developers, and music enthusiasts.
Improved Song Structure and Sound
MusicGen uses an improved model that enhances the coherence of melodies, harmonies, and rhythms, resulting in better-structured and higher-quality music. It also allows generating two songs at once, which can aid in experimentation and workflow.
Accessibility
MusicGen is available for free and does not require a login, making it accessible to anyone interested in AI music generation. It is also available through platforms like HuggingFace, which provides an easy-to-use interface.
Disadvantages of MusicGen
Limitations in Vocal Generation
MusicGen is not capable of generating realistic vocals. The model has been trained with data where vocals have been removed, which limits its ability to produce songs with singing.
Language and Cultural Biases
The model has been trained primarily with English descriptions and may not perform as well with other languages. Additionally, the training data lacks diversity in music cultures, which can result in biased or less representative music generation.
Style and Genre Limitations
MusicGen does not perform equally well for all music styles and cultures. It may struggle with certain genres or styles that are not well-represented in the training data.
Error Propagation
Generating music with long sequences of tokens can be inefficient and error-prone. Errors in early tokens can compound, affecting the accuracy of subsequent tokens, although strategies like the ‘delay strategy’ are used to mitigate this.
Prompt Engineering Challenges
It can be difficult to determine what types of text descriptions provide the best music generations. Prompt engineering may be required to obtain satisfying results, which can be time-consuming and require some trial and error.
End of Song Issues
Sometimes, MusicGen generates music that collapses to silence at the end of the song, which can be inconvenient for users looking for complete tracks.
By considering these points, users can better understand the capabilities and limitations of MusicGen, helping them to make informed decisions about its use in their creative projects.

MusicGen - Comparison with Competitors
Unique Features of MusicGen
- Versatile Generation Modes: MusicGen offers multiple generation modes, including text-conditional, audio-prompted, and unconditional generation. This flexibility allows users to create music based on text descriptions, melodies, or without any specific prompts.
- Advanced Model Architecture: MusicGen incorporates a sophisticated architecture involving a text encoder, a language model-based decoder, and an audio encoder/decoder. This setup enables the generation of high-quality music that closely resembles the described genres and styles.
- Customizable Parameters: Users can modify generation parameters such as guidance scale and maximum length, providing more control over the music creation process.
- Extensive Training Dataset: MusicGen is trained on 20,000 hours of diverse licensed music, which enhances its ability to generate coherent and professional-sounding tracks.
Comparison with Other Tools
Soundraw
- Best For: Soundraw is ideal for creating royalty-free music with preset elements. It lacks the flexibility of MusicGen but offers unique, royalty-free tracks generated from original sounds. Soundraw has limited music editing features and relies on preset options for genre, tempo, and mood.
- Key Difference: Unlike MusicGen, Soundraw does not offer text-conditional or audio-prompted generation. It is more suited for users needing quick, royalty-free music without extensive customization options.
Udio
- Best For: Udio is known for generating music with text prompts, similar to MusicGen. However, Udio’s features are more streamlined, and it does not offer the same level of customization or advanced model architecture as MusicGen.
- Key Difference: Udio’s interface is simpler, and while it can generate high-quality tracks from text prompts, it may not match the versatility and control offered by MusicGen.
AIVA
- Best For: AIVA was one of the first companies to offer instant AI music generation and supports a wide range of genres and generative attributes like mood, theme, length, tempo, and instruments. AIVA requires no musical knowledge and includes a MIDI editor for further customization.
- Key Difference: AIVA’s focus is broader, covering more genres and including a MIDI editor, which may appeal to users who want to edit their generated music further. However, AIVA’s free plan is limited to three downloads per month, unlike MusicGen which is free and accessible on the Hugging Face platform.
Bandlab SongStarter
- Best For: Bandlab SongStarter is part of a remote music collaboration app and is good for generating tracks in various genres like pop, trap, and electronic. It allows users to select preferred music types and make adjustments to tempo and key signature after generation.
- Key Difference: Bandlab SongStarter is more integrated with a digital studio, allowing for post-generation editing of tracks. However, its generative attributes are limited compared to MusicGen, and it does not offer the same level of text-conditional or audio-prompted generation.
Potential Alternatives
- Google MusicFX (formerly MusicLM): This tool is known for its accurate text-to-song generation and is particularly good for musicians and non-musicians alike. However, it has limitations in terms of download availability and may include noise and artifacts in the audio output.
- Key Difference: MusicFX excels in text-to-song generation but lacks the versatility and customization options of MusicGen.

MusicGen - Frequently Asked Questions
What is MusicGen?
MusicGen is an AI tool developed by Meta for generating high-quality music based on text descriptions, melodies, or audio prompts. It uses a single-stage auto-regressive Transformer model to produce music conditioned on the given inputs.
How does MusicGen generate music?
MusicGen generates music through a pipeline of three components:
- Text Encoder: Maps text inputs to a sequence of hidden-state representations using a frozen text encoder (e.g., T5 or Flan-T5).
- MusicGen Decoder: An auto-regressive language model that generates audio tokens conditioned on the encoder’s hidden-state representations.
- Audio Encoder/Decoder: Encodes audio prompts and decodes the generated audio tokens to recover the audio waveform.
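The data flow between these three components can be sketched with toy stand-ins (none of this is the real T5 encoder, LM decoder, or EnCodec codec; the functions are placeholders that only show the hand-off):

```python
# Toy stand-ins for the three components of the pipeline described above.
# Each function is a placeholder illustrating what flows where, not a model.

def text_encoder(prompt):
    """Stand-in for the frozen text encoder (T5): text -> hidden states."""
    return [float(ord(c) % 7) for c in prompt]

def decoder(hidden_states, steps):
    """Stand-in for the auto-regressive LM decoder: emit one audio token
    per step, conditioned (here, trivially) on the encoder output."""
    bias = int(sum(hidden_states))
    return [(bias + t) % 1024 for t in range(steps)]

def audio_decoder(tokens):
    """Stand-in for EnCodec decoding: tokens -> waveform samples."""
    return [t / 1024.0 for t in tokens]

def generate(prompt, steps=8):
    hidden = text_encoder(prompt)
    tokens = decoder(hidden, steps)
    return audio_decoder(tokens)
```

In the real model the decoder step runs once per EnCodec frame and the audio decoder reconstructs a 32 kHz waveform, but the hand-off order is the same.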
What types of inputs can MusicGen use?
MusicGen can use various types of inputs, including:
- Text Descriptions: Users can provide text descriptions specifying genre, tempo, and other musical parameters.
- Melodic Structures: MusicGen can generate music based on melodic structures from other audio tracks or user-created melodies.
- Audio Prompts: Existing audio clips can be used as a basis for new music creation.
How do I use MusicGen?
You can use MusicGen through the Hugging Face platform. Here are the steps:
- WebUI: Use the user-friendly WebUI to input descriptive prompts or upload audio files. The model generates music within a few minutes.
- Local Setup: For local use, you need to install Python, NVIDIA’s CUDA Toolkit, and other dependencies, then clone the MusicGen code from GitHub.
What are the generation modes available in MusicGen?
MusicGen offers several generation modes:
- Greedy Mode: Generates music based on the most likely next token.
- Sampling Mode: Recommended for better results, as it introduces randomness to explore different possibilities.
- Unconditional Generation: MusicGen can generate music without specific prompts or inputs.
How long does it take to generate music with MusicGen?
MusicGen can generate a 12-second music clip within a couple of minutes. The exact time may vary depending on the complexity of the input and the computational resources available.
What training data was used for MusicGen?
MusicGen was trained on 20,000 hours of diverse licensed music, including high-quality tracks and instrumentals. This extensive training dataset helps in generating high-quality music samples.
Can I customize the generation process in MusicGen?
Yes, you can customize the generation process by modifying parameters such as the guidance scale and maximum length of the generated music. This allows for more control over the output.
Where can I find the pre-trained checkpoints and code for MusicGen?
The pre-trained checkpoints and code for MusicGen are available on the Hugging Face Hub. You can access them through the Hugging Face platform or by cloning the repository from GitHub.
Is MusicGen open-source?
Yes, MusicGen is open-source. The code is available on GitHub, and the model can be tested online at Hugging Face.
What are the key features of the MusicGen model architecture?
The key features include:
- Advanced Model Architecture: Incorporates a text encoder, a language model-based decoder, and an audio encoder/decoder.
- Efficient Token Interleaving: Allows for generating all codebooks in a single forward pass without the need for cascading multiple models.

MusicGen - Conclusion and Recommendation
Final Assessment of MusicGen
MusicGen, developed by Meta, is a significant advancement in the field of AI-driven music generation. Here’s a comprehensive overview of its capabilities, benefits, and who would most benefit from using it.
Key Features and Capabilities
MusicGen is a text-to-music model that generates high-quality music samples based on text descriptions or audio prompts. It operates as a single-stage auto-regressive Transformer model, which simplifies the music generation process compared to traditional cascading models.
- Text and Melody Conditioning: MusicGen can be prompted by both text and melody, allowing users to create music that aligns with specific genres, moods, or styles.
- High-Quality Output: The model generates music that is melodically aligned with the given harmonic structure and adheres well to the provided text input.
- Efficient Generation: MusicGen can predict all necessary codebooks in one pass, reducing the number of auto-regressive steps required per second of audio.
- Customization: Users can customize the style and genre of the generated instrumental tracks to fit their needs.
Performance and Evaluation
MusicGen has been evaluated on various objective and subjective measures, including Fréchet Audio Distance, Kullback-Leibler Divergence, and CLAP Score. Human studies have also been conducted to assess the overall quality, text relevance, and melodic adherence of the generated music. These evaluations indicate that MusicGen performs better than comparable models such as MusicLM and Riffusion.
Intended Users and Benefits
MusicGen is particularly beneficial for several groups:
- Researchers: It is an invaluable tool for research in AI-based music generation, helping to probe the limitations of generative models and improve them.
- Musicians and Composers: MusicGen aids in exploring new musical ideas, creating demo tracks, and developing complete compositions. It fosters creativity and innovation in music composition.
- Content Creators: YouTubers, influencers, and other content creators can use MusicGen to generate original background music, intro themes, and soundtracks tailored to their video content.
- Educational Institutions: Schools and universities can utilize MusicGen for teaching music composition and AI, allowing students to practice music creation and improve their creative and technical skills.
- Advertisers and Marketers: MusicGen helps in creating custom jingles or soundtracks that align with brand identities, enhancing marketing campaigns.
Recommendations
Given its capabilities and benefits, MusicGen is highly recommended for anyone interested in AI-driven music creation. Here are some key points to consider:
- Ease of Use: MusicGen is user-friendly and does not require prior musical expertise. It allows users to generate music instantly without any barriers to entry.
- Customization: The model offers significant customization options, including the ability to steer the generated instrumental tracks toward specific genres or styles.
- Quality: MusicGen generates high-quality music that is well-aligned with the provided text or melody inputs, making it a valuable tool for both professional and amateur users.