Tarun R Jain Last Updated : 20 Jan, 2025
Have you ever found yourself staring at a product’s ingredients list, googling unfamiliar chemical names to figure out what they mean? It’s a common struggle – deciphering complex product information on the spot can be overwhelming and time-consuming. Traditional methods, like searching for each ingredient individually, often lead to fragmented and confusing results. But what if there was a smarter, faster way to analyze product ingredients and get clear, actionable insights instantly? In this article, we’ll walk you through building a Product Ingredients Analyzer using Gemini 2.0, Phidata, and Tavily Web Search. Let’s dive in and make sense of those ingredient lists once and for all!
Learning Objectives
- Design a Multimodal AI Agent architecture using Phidata and Gemini 2.0 for vision-language tasks.
- Integrate Tavily Web Search into agent workflows for better context and information retrieval.
- Build a Product Ingredient Analyzer Agent that combines image processing and web search for detailed product insights.
- Learn how system prompts and instructions guide agent behavior in multimodal tasks.
- Develop a Streamlit UI for real-time image analysis, nutrition details, and health-based suggestions.
This article was published as a part of the Data Science Blogathon.
Table of contents
- What are Multimodal Systems?
- Multimodal Real-world Use Cases
- Why Multimodal Agent?
- Building Product Ingredient Analyzer Agent
- Important Links
- Conclusion
- Frequently Asked Questions
What are Multimodal Systems?
Multimodal systems process and understand multiple types of input data—like text, images, audio, and video—simultaneously. Vision-language models, such as Gemini 2.0 Flash, GPT-4o, Claude 3.5 Sonnet, and Pixtral-12B, excel at understanding relationships between these modalities, extracting meaningful insights from complex inputs.
In this context, we focus on vision-language models that analyze images and generate textual insights. These systems combine computer vision and natural language processing to interpret visual information based on user prompts.
Multimodal Real-world Use Cases
Multimodal systems are transforming industries:
- Finance: Users can take screenshots of unfamiliar terms in online forms and get instant explanations.
- E-commerce: Shoppers can photograph product labels to receive detailed ingredient analysis and health insights.
- Education: Students can capture textbook diagrams and receive simplified explanations.
- Healthcare: Patients can scan medical reports or prescription labels for simplified explanations of terms and dosage instructions.
Why Multimodal Agent?
The shift from single-mode AI to multimodal agents marks a major leap in how we interact with AI systems. Here’s what makes multimodal agents so effective:
- They process both visual and textual information simultaneously, delivering more accurate and context-aware responses.
- They simplify complex information, making it accessible to users who may struggle with technical terms or detailed content.
- Instead of manually searching for individual components, users can upload an image and receive comprehensive analysis in one step.
- By combining tools like web search and image analysis, they provide more complete and reliable insights.
Building Product Ingredient Analyzer Agent

Let’s break down the implementation of a Product Ingredient Analysis Agent:
Step 1: Setup Dependencies
- Gemini 2.0 Flash: Handles multimodal processing with enhanced vision capabilities
- Tavily Search: Provides web search integration for additional context
- Phidata: Orchestrates the Agent system and manages workflows
- Streamlit: Turns the prototype into a web-based application
!pip install phidata google-generativeai tavily-python streamlit pillow
Step 2: API Setup and Configuration
In this step, we will set up the environment variables and gather the required API credentials to run this use case.
- For the Gemini API key, visit: https://aistudio.google.com/
- For the Tavily API key, visit: https://app.tavily.com/
```python
from phi.agent import Agent
from phi.model.google import Gemini   # needs an API key
from phi.tools.tavily import TavilyTools  # also needs an API key
import os

TAVILY_API_KEY = "<replace-your-api-key>"
GOOGLE_API_KEY = "<replace-your-api-key>"

os.environ['TAVILY_API_KEY'] = TAVILY_API_KEY
os.environ['GOOGLE_API_KEY'] = GOOGLE_API_KEY
```
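Hardcoding keys is fine for a quick prototype. As a minimal alternative sketch, assuming the optional python-dotenv package (not part of the dependency list above) and a .env file next to the script, you could load the keys at startup instead:

```python
# Optional alternative: load keys from a .env file instead of hardcoding them.
# Assumes `pip install python-dotenv` and a .env file containing:
#   TAVILY_API_KEY=...
#   GOOGLE_API_KEY=...
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file and populates os.environ

# Fail fast if either key is missing
for key in ("TAVILY_API_KEY", "GOOGLE_API_KEY"):
    if not os.getenv(key):
        raise RuntimeError(f"{key} is not set")
```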
Step 3: System Prompt and Instructions
To get better responses from language models, you need to write better prompts. This involves clearly defining the role and providing detailed instructions in the system prompt for the LLM.
Let’s define the role and responsibilities of an Agent with expertise in ingredient analysis and nutrition. The instructions should guide the Agent to systematically analyze food products, assess ingredients, consider dietary restrictions, and evaluate health implications.
```python
SYSTEM_PROMPT = """
You are an expert Food Product Analyst specialized in ingredient analysis and nutrition science.
Your role is to analyze product ingredients, provide health insights, and identify potential concerns
by combining ingredient analysis with scientific research. You utilize your nutritional knowledge and
research works to provide evidence-based insights, making complex ingredient information accessible
and actionable for users.
Return your response in Markdown format.
"""

INSTRUCTIONS = """
* Read the ingredient list from the product image
* Remember the user may not be educated about the product; break it down in simple words, like explaining to a 10-year-old kid
* Identify artificial additives and preservatives
* Check against major dietary restrictions (vegan, halal, kosher). Include this in the response.
* Rate nutritional value on a scale of 1-5
* Highlight key health implications or concerns
* Suggest healthier alternatives if needed
* Provide brief evidence-based recommendations
* Use the Search tool for getting context
"""
```
Step 4: Define the Agent Object
The Agent, built using Phidata, is configured to return Markdown-formatted output and operate based on the system prompt and instructions defined earlier. The model used in this example is Gemini 2.0 Flash, known for its strong ability to understand images and videos.
For tool integration, we will use Tavily Search, an advanced web search engine that provides relevant context directly in response to user queries, avoiding unnecessary descriptions, URLs, and irrelevant parameters.
```python
agent = Agent(
    model=Gemini(id="gemini-2.0-flash-exp"),
    tools=[TavilyTools()],
    markdown=True,
    system_prompt=SYSTEM_PROMPT,
    instructions=INSTRUCTIONS,
)
```
Step 5: Multimodal – Understanding the Image
With the Agent components now in place, the next step is to provide user input. This can be done in two ways: either by passing the image path or the URL, along with a user prompt specifying what information needs to be extracted from the provided image.
Approach 1: Using an Image Path

```python
agent.print_response(
    "Analyze the product image",
    images=["images/bournvita.jpg"],
    stream=True,
)
```
Output:

Approach 2: Using a URL
```python
agent.print_response(
    "Analyze the product image",
    images=["https://beardo.in/cdn/shop/products/9_2ba7ece4-0372-4a34-8040-5dc40c89f103.jpg?v=1703589764&width=1946"],
    stream=True,
)
```
Output:

Step 6: Develop the Web App using Streamlit
Now that we know how to execute the Multimodal Agent, let’s build the UI part using Streamlit.
```python
import streamlit as st
from PIL import Image
from io import BytesIO
from tempfile import NamedTemporaryFile

st.title("🔍 Product Ingredient Analyzer")
```
To optimize performance, define the Agent construction inside a cached function. The cache decorator improves efficiency by reusing the same Agent instance.
Since Streamlit reruns the entire script after each event loop or widget trigger, adding st.cache_resource ensures the function is not re-executed on every rerun; the Agent instance is created once and kept in the cache.
```python
@st.cache_resource
def get_agent():
    return Agent(
        model=Gemini(id="gemini-2.0-flash-exp"),
        system_prompt=SYSTEM_PROMPT,
        instructions=INSTRUCTIONS,
        tools=[TavilyTools(api_key=os.getenv("TAVILY_API_KEY"))],
        markdown=True,
    )
```
When the user provides a new image path, the analyze_image function runs and executes the Agent object returned by get_agent. For both live capture and image upload, the file needs to be saved temporarily for processing.
The image is stored in a temporary file, and once the execution is completed, the temporary file is deleted to free up resources. This can be done using the NamedTemporaryFile function from the tempfile library.
```python
def analyze_image(image_path):
    agent = get_agent()
    with st.spinner('Analyzing image...'):
        response = agent.run(
            "Analyze the given image",
            images=[image_path],
        )
        st.markdown(response.content)

def save_uploaded_file(uploaded_file):
    with NamedTemporaryFile(dir='.', suffix='.jpg', delete=False) as f:
        f.write(uploaded_file.getbuffer())
        return f.name
```
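To make that lifecycle concrete, here is the pattern the tab handlers below follow, wrapped in a small helper purely for illustration (handle_upload is a hypothetical name, not part of the original code):

```python
def handle_upload(uploaded_file):
    """Save the upload to a temporary file, analyze it, then clean up.

    `uploaded_file` is the object returned by st.file_uploader or st.camera_input.
    """
    temp_path = save_uploaded_file(uploaded_file)  # write the upload to a temporary .jpg
    analyze_image(temp_path)                       # run the cached Agent on the temp file
    os.unlink(temp_path)                           # delete the temporary file to free resources
```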
Images selected by users will vary in resolution and size. To maintain a consistent layout and display each image clearly, we resize the uploaded or captured image so it fits neatly on the screen.
The LANCZOS resampling algorithm provides high-quality resizing, particularly beneficial for product images where text clarity is crucial for ingredient analysis.
```python
MAX_IMAGE_WIDTH = 300

def resize_image_for_display(image_file):
    img = Image.open(image_file)
    aspect_ratio = img.height / img.width
    new_height = int(MAX_IMAGE_WIDTH * aspect_ratio)
    img = img.resize((MAX_IMAGE_WIDTH, new_height), Image.Resampling.LANCZOS)
    buf = BytesIO()
    img.save(buf, format="PNG")
    return buf.getvalue()
```
Step 7: UI Features for Streamlit
The interface is divided into three navigation tabs, from which the user can pick the option that suits them:
- Tab-1: Example Products that users can select to test the app
- Tab-2: Upload an Image of your choice if it’s already saved.
- Tab-3: Capture or Take a live photo and analyze the product.
We repeat the same logical flow for all three tabs:
- First, choose the image and resize it for display on the Streamlit UI using st.image.
- Second, save that image to a temporary file so it can be passed to the Agent object.
- Third, analyze the image; this is where the Agent runs with the Gemini 2.0 model and the Tavily Search tool.
State management is handled through Streamlit’s session state, tracking selected examples and analysis status.

```python
def main():
    if 'selected_example' not in st.session_state:
        st.session_state.selected_example = None
    if 'analyze_clicked' not in st.session_state:
        st.session_state.analyze_clicked = False

    tab_examples, tab_upload, tab_camera = st.tabs([
        "📚 Example Products",
        "📤 Upload Image",
        "📸 Take Photo"
    ])

    with tab_examples:
        example_images = {
            "🥤 Energy Drink": "images/bournvita.jpg",
            "🥔 Potato Chips": "images/lays.jpg",
            "🧴 Shampoo": "images/shampoo.jpg"
        }
        cols = st.columns(3)
        for idx, (name, path) in enumerate(example_images.items()):
            with cols[idx]:
                if st.button(name, use_container_width=True):
                    st.session_state.selected_example = path
                    st.session_state.analyze_clicked = False

    with tab_upload:
        uploaded_file = st.file_uploader(
            "Upload product image",
            type=["jpg", "jpeg", "png"],
            help="Upload a clear image of the product's ingredient list"
        )
        if uploaded_file:
            resized_image = resize_image_for_display(uploaded_file)
            st.image(resized_image, caption="Uploaded Image",
                     use_container_width=False, width=MAX_IMAGE_WIDTH)
            if st.button("🔍 Analyze Uploaded Image", key="analyze_upload"):
                temp_path = save_uploaded_file(uploaded_file)
                analyze_image(temp_path)
                os.unlink(temp_path)

    with tab_camera:
        camera_photo = st.camera_input("Take a picture of the product")
        if camera_photo:
            resized_image = resize_image_for_display(camera_photo)
            st.image(resized_image, caption="Captured Photo",
                     use_container_width=False, width=MAX_IMAGE_WIDTH)
            if st.button("🔍 Analyze Captured Photo", key="analyze_camera"):
                temp_path = save_uploaded_file(camera_photo)
                analyze_image(temp_path)
                os.unlink(temp_path)

    if st.session_state.selected_example:
        st.divider()
        st.subheader("Selected Product")
        resized_image = resize_image_for_display(st.session_state.selected_example)
        st.image(resized_image, caption="Selected Example",
                 use_container_width=False, width=MAX_IMAGE_WIDTH)
        if st.button("🔍 Analyze Example", key="analyze_example") and not st.session_state.analyze_clicked:
            st.session_state.analyze_clicked = True
            analyze_image(st.session_state.selected_example)
```
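The snippet above defines main() but never shows it being invoked. As a minimal sketch, assuming all of the pieces live in a single script (called app.py here purely for illustration), add the usual entry point:

```python
# Entry point: run main() when Streamlit executes the script.
if __name__ == "__main__":
    main()
```

You can then launch the app from a terminal with `streamlit run app.py`.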
Important Links
- You can find the full code here.
- Replace the “<replace-your-api-key>” placeholders with your own keys.
- For tab_examples, you need an images folder containing the example product images. The GitHub repository with the images directory is linked here.
- If you want to try the use case, the deployed app is available here.
Conclusion
Multimodal AI agents represent a major leap forward in how we interact with and understand complex information in our daily lives. By combining vision processing, natural language understanding, and web search capabilities, systems like the Product Ingredient Analyzer can provide instant, comprehensive analysis of products and their ingredients, making informed decision-making more accessible to everyone.
Key Takeaways
- Multimodal AI agents improve how we understand product information. They combine text and image analysis.
- With Phidata, an open-source framework, we can build and manage agent systems. These systems use models like GPT-4o and Gemini 2.0.
- Agents use tools like vision processing and web search. This makes their analysis more complete and accurate. LLMs have limited knowledge, so agents use tools to handle complex tasks better.
- Streamlit makes it easy to build web apps for LLM-based tools. Examples include RAG and multimodal agents.
- Good system prompts and instructions guide the agent. This ensures useful and accurate responses.
Frequently Asked Questions
Q1. Name some open-source multimodal vision-language models.
A. LLaVA (Large Language and Vision Assistant), Pixtral-12B by Mistral AI, Multimodal-GPT by OpenFlamingo, NVILA by NVIDIA, and the Qwen-VL models are a few open-source (or open-weights) multimodal vision-language models that process text and images for tasks like visual question answering.
Q2. Is Llama 3 multimodal?
A. The Llama 3 family now includes multimodal models: the Llama 3.2 Vision models (11B and 90B parameters) process both text and images, enabling tasks like image captioning and visual reasoning.
Q3. How is Multimodal LLM different from Multimodal Agent?
A. A Multimodal Large Language Model (LLM) processes and generates data across various modalities, such as text, images, and audio. In contrast, a Multimodal Agent utilizes such models to interact with its environment, perform tasks, and make decisions based on multimodal inputs, often integrating additional tools and systems to execute complex actions.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
Tarun R Jain
Data Scientist at AI Planet || YouTube- AIWithTarun || Google Developer Expert in ML || Won 5 AI hackathons || Co-organizer of TensorFlow User Group Bangalore || Pie & AI Ambassador at DeepLearningAI