<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Marketing Research</title>
	<atom:link href="https://rosecreative.marketing/category/marketing-research/feed/" rel="self" type="application/rss+xml" />
	<link>https://rosecreative.marketing</link>
	<description>Rose Creative Marketing</description>
	<lastBuildDate>Tue, 11 Mar 2025 10:18:10 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.1.10</generator>

<image>
	<url>https://rosecreative.marketing/wp-content/uploads/2023/06/cropped-rose-pirate_logo_black-32x32.png</url>
	<title>Marketing Research</title>
	<link>https://rosecreative.marketing</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Marketing Research: What’s Real, What’s Garbage–a Refresher for Marketers</title>
		<link>https://rosecreative.marketing/marketing-research-whats-real-whats-garbage-a-refresher-for-marketers/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 11 Mar 2025 10:18:09 +0000</pubDate>
				<category><![CDATA[Insight]]></category>
		<category><![CDATA[Marketing Research]]></category>
		<category><![CDATA[Marketing Strategy]]></category>
		<category><![CDATA[Rose Creative Marketing]]></category>
		<guid isPermaLink="false">https://rosecreative.marketing/?p=41117</guid>

					<description><![CDATA[Many brands turn to surveys and focus groups hoping for a glimpse into the future, treating research like...]]></description>
										<content:encoded><![CDATA[
<p class="has-medium-font-size">Many brands turn to surveys and focus groups hoping for a glimpse into the future, treating research like an infallible crystal ball that can validate their marketing decisions. But research isn’t magic, and it’s far from foolproof. From misleading percentages to meaningless sample sizes, marketers constantly misuse data—often without even realizing it. This guide breaks down how to read research properly, avoid common statistical traps, and stop making decisions based on numbers that don’t actually mean anything.</p>



<p>British Prime Minister Benjamin Disraeli apparently once said: “There are three kinds of lies: lies, damned lies, and statistics.” And I don’t disagree with ol’ Ben.</p>



<p>Numbers, as trustworthy as they seem, can be twisted, cherry-picked, and misrepresented until they say whatever you want them to say. And if you think you’re too savvy to be hoodwinked, there’s a bridge I’d like to sell you.</p>



<p>I’m not saying most people deliberately mislead with statistics. I’m saying research is, at best, flawed—and at worst, manipulated, whether by design or incompetence, to serve an agenda. It’s flawed because the act of observing people changes their behavior (the Hawthorne effect), sample sizes are often too small, and questions are framed in ways that skew results. It’s manipulated when data is presented as fact but shaped to tell a convenient fiction.</p>



<p>The COVID-19 pandemic was a goldmine for bad predictions. Early in the crisis, research firms confidently declared that remote work would permanently replace office culture, that people would stop dining out, and that retail was dead. Fast forward a few years, and many office buildings are full, restaurants have waitlists, and retail is thriving. It turns out that asking people what they think they’ll do in the future is wildly unreliable.</p>



<p>Then there’s the streaming industry, which completely misread subscriber loyalty. Netflix, Disney+, and HBO Max all read their research as proof that customers would tolerate price hikes. Instead, millions of users canceled their subscriptions. People say they’ll stay, but when their credit card gets charged, reality kicks in.</p>



<p>Political polling is another prime example of research gone wrong. The last U.S. presidential election was yet another reminder that pollsters can get it horribly wrong—again. Pre-election numbers confidently predicted one outcome, only for election night to tell a different story. The same thing happened in the UK, where polls repeatedly underestimated Conservative Party support. Why? The same reasons marketing research fails: bad sampling, self-selection bias, misleading questions, and overconfidence in small differences. I only bring this up because if the top US pollsters—armed with decades of experience and millions in funding—can still botch an election forecast, do you really think your brand’s toothpaste survey is airtight?</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="654" src="https://rosecreative.marketing/wp-content/uploads/2025/03/Netflix-min-1024x654.png" alt="" class="wp-image-41090" srcset="https://rosecreative.marketing/wp-content/uploads/2025/03/Netflix-min-1024x654.png 1024w, https://rosecreative.marketing/wp-content/uploads/2025/03/Netflix-min-300x192.png 300w, https://rosecreative.marketing/wp-content/uploads/2025/03/Netflix-min-768x491.png 768w, https://rosecreative.marketing/wp-content/uploads/2025/03/Netflix-min-1536x982.png 1536w, https://rosecreative.marketing/wp-content/uploads/2025/03/Netflix-min-2048x1309.png 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption"><em>Netflix, Disney+, and HBO Max relied on research to justify price hikes—millions canceled instead. What people say and do rarely match</em>.</figcaption></figure>



<p>So, given my skepticism, why am I writing an article about how marketers should use statistics correctly? Because too many rely on research without really understanding how stats work—and if you&#8217;re going to use them, you should at least use them right.</p>



<p><strong>What Research Can (and Can’t) Do for Marketers</strong></p>



<p>Research can be useful for spotting broad trends over time. If your customer satisfaction score has been dropping steadily for two years, that’s a real trend worth investigating. But if it dipped by 3% last month, don’t panic—it’s probably just noise. It’s also great for tracking shifts in behavior over time. If your brand was beloved in 2022 but is now struggling, research can help pinpoint when and why perceptions started to change.</p>



<p>It’s also a somewhat useful tool for generating hypotheses—showing you what’s worth investigating, rather than dictating your next move. If data suggests Gen Z is spending more on skincare, it’s worth exploring, but it doesn’t mean you should launch a moisturizer tomorrow.</p>



<p>That brings us to an important distinction: research is best for testing messaging, not ideas. Many of the best marketing ideas would have been murdered in a focus group. However, if research shows that people respond more positively to the word “premium” than “luxury,” that’s a useful insight you can apply without gutting a creative concept.</p>



<p>The real problem starts when marketers misuse research, expecting it to serve as a crystal ball. People are notoriously unreliable when predicting their future behavior. Ask them if they plan to go to the gym three times a week next year, and most will say yes. Check back in February, and their gym card is collecting dust.</p>



<p>Another major misuse of research is using it to justify a bad decision after the fact. If you’re cherry-picking data to validate a choice you already made, you’re not doing research—you’re doing PR. The worst offenders, however, are those who let research replace instinct and creativity altogether. No survey would have led to Tesla shunning traditional advertising, and no focus group would have approved Ryanair’s counterintuitive, unfiltered (and very popular) strategy of openly mocking its own customers on social media. Some of the greatest marketing decisions have gone against the data.</p>



<p><strong>How Research Gets Misused: Common Mistakes and How to Spot Them</strong></p>



<p>If you’re going to use research, you’d better know how to conduct and analyze it properly. Here are a few things to know to make your research more useful and some of the most common (and most dangerous) pitfalls to watch out for.</p>



<p>1. <em>The More Agreement, the More Reliability</em><br>If you conduct a survey and nearly everyone agrees on something, congratulations—you might actually have a reliable result. The Law of Large Numbers tells us that as a sample size grows, the average of the collected data gets closer to the true average of the population. In plain English, if you survey enough people and they overwhelmingly agree, you’re probably onto something. But if the responses are all over the place, your data is about as reliable as a drunk guy guessing lottery numbers. That’s why a survey where 90% of respondents agree on something is far more useful than one where opinions are scattered.</p>
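<p>The Law of Large Numbers is easy to see in a simulation. The sketch below assumes a purely illustrative population where the “true” agreement rate is 90% (not data from any real survey), and shows the observed rate settling toward it as the sample grows:</p>

```python
# Law of Large Numbers sketch: the observed agreement rate in a simulated
# survey drifts toward the true population rate as the sample size grows.
# The 90% figure is an illustrative assumption, not real survey data.
import random

random.seed(42)  # fixed seed so the run is reproducible

TRUE_AGREEMENT = 0.90

def simulate_survey(n):
    """Return the observed agreement rate among n simulated respondents."""
    agrees = sum(1 for _ in range(n) if random.random() < TRUE_AGREEMENT)
    return agrees / n

for n in (10, 100, 1_000, 100_000):
    print(f"n = {n:>7,}: observed agreement = {simulate_survey(n):.3f}")
```

<p>Small samples bounce around; by the time n reaches the tens of thousands, the observed rate is pinned within a fraction of a percent of the true one.</p>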



<p>2. <em>The Magic Number for Any Universe: 1,067</em><br>Shouldn’t you survey millions of people for the best results? Not really—bigger sample sizes don’t always mean better accuracy. The magic number you need is 1,067. Why this oddly specific number? Because 1,067 respondents give you a ±3% margin of error at a 95% confidence level, no matter how big the total population is. Whether you’re surveying an entire country or everyone who’s ever rage-tweeted about airline delays, you don’t need millions of responses—just the right 1,067. For reference: 500 respondents = ±4.4% margin of error (less precise). 10,000 respondents = ±1% margin of error (more precise, but with diminishing returns). This is why 1,067 is the sweet spot—big enough for accuracy, small enough to avoid lighting your research budget on fire. Of course, the sample has to be representative in terms of gender, age, and so on.</p>



<p>By now your eyes may be glazing over. But if you&#8217;re interested, here is the formula for margin of error.</p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img decoding="async" loading="lazy" width="1024" height="361" src="https://rosecreative.marketing/wp-content/uploads/2025/03/formula4-1024x361.png" alt="" class="wp-image-41103" srcset="https://rosecreative.marketing/wp-content/uploads/2025/03/formula4-1024x361.png 1024w, https://rosecreative.marketing/wp-content/uploads/2025/03/formula4-300x106.png 300w, https://rosecreative.marketing/wp-content/uploads/2025/03/formula4-768x271.png 768w, https://rosecreative.marketing/wp-content/uploads/2025/03/formula4-1536x542.png 1536w, https://rosecreative.marketing/wp-content/uploads/2025/03/formula4-2048x723.png 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Where: <strong>Z</strong> is the Z-score for the desired confidence level (for 95%, Z ≈ 1.96); <strong>p</strong>&nbsp;is the estimated proportion of the population that gives a particular response (usually assumed to be 0.5 or 50% because it gives the maximum variance, meaning the most conservative estimate); <strong>n</strong>&nbsp;is the sample size.</figcaption></figure></div>
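<p>For the code-minded, the formula in the figure above is a one-liner. A quick sketch in Python confirms the numbers quoted in the text (Z = 1.96 for 95% confidence, p = 0.5 for the most conservative estimate):</p>

```python
# Margin of error: MOE = Z * sqrt(p * (1 - p) / n), per the formula above.
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Margin of error (as a fraction) for a sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1_067, 10_000):
    print(f"n = {n:>6,}: ±{margin_of_error(n) * 100:.1f}%")
# n = 1,067 lands almost exactly on ±3.0%; 500 gives ±4.4%; 10,000 gives ±1.0%.
```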


<p>3. <em>The Great Data Slice-and-Dice Disaster</em><br>Breaking data into smaller subgroups is like slicing a cake—you start with a full, delicious picture, but the more pieces you cut, the smaller and less satisfying each bite becomes.</p>



<p>A&nbsp;UAE-wide survey of 1,067 people&nbsp;gives you a reliable ±3% margin of error. But break that same survey down by&nbsp;each emirate, and suddenly your sample sizes shrink along with their reliability. If Dubai accounts for&nbsp;40% of respondents, that’s only&nbsp;427 people, pushing the margin of error closer to&nbsp;±5%.&nbsp;Abu Dhabi, with&nbsp;15% of respondents, now has just&nbsp;160 people, sending the margin of error&nbsp;toward ±8%.&nbsp;Keep slicing further—say, by&nbsp;age, gender, or left-handedness&nbsp;(because why not?)—and suddenly your data is less science, more astrology.&nbsp;The smaller and more obscure the sample, the shakier the conclusion.</p>
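<p>The same margin-of-error arithmetic shows exactly how fast the emirate-level slices fall apart. A small Python sketch, using the sample shares quoted in the text:</p>

```python
# Slicing a 1,067-person survey by emirate: each slice follows the
# percentages quoted in the text, and its margin of error widens accordingly.
import math

def margin_of_error(n, z=1.96, p=0.5):
    return z * math.sqrt(p * (1 - p) / n)

total = 1_067
slices = {"UAE-wide (100%)": 1.00, "Dubai (40%)": 0.40, "Abu Dhabi (15%)": 0.15}

for label, share in slices.items():
    n = round(total * share)
    print(f"{label:<16} n = {n:>5}: ±{margin_of_error(n) * 100:.1f}%")
```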



<p>4. <em>The Oven-and-Ice Problem: Why Percentages Can Lie</em><br>One of the worst offenses in marketing research is playing fast and loose with percentages.</p>



<p>There’s an old joke about a statistician who has his head in an oven and his feet in ice, yet insists that, on average, he feels fine. This is exactly how marketers present data—averaging wildly different numbers and pretending they tell a coherent story. If a survey claims&nbsp;“30% of users clicked our ad,”&nbsp;the obvious first question should always be:&nbsp;30% of what?&nbsp;Five people out of 15? 3,000 out of 10,000? A percentage without raw numbers is meaningless. A company might say its&nbsp;conversion rate jumped 50%,&nbsp;but if that just means it went from&nbsp;two customers to three, that’s not exactly cause for celebration.</p>



<p>5. <em>Margin of Error: Why Tiny Differences Don’t Matter</em><strong><em><br></em></strong>Marketers love obsessing over small shifts in numbers, but if the change is within the&nbsp;margin of error, it’s not a real change—it’s noise. If a survey finds that&nbsp;“37% of customers prefer Brand X (±3%)”,&nbsp;that means the real number could be anywhere from&nbsp;34% to 40%.&nbsp;If you’re making decisions based on a&nbsp;1% or 2% difference when your margin of error is ±3%,&nbsp;congratulations—you’re reading statistical tea leaves. If the margin of error is bigger than the difference, your conclusion is built on statistical quicksand.</p>
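<p>A deliberately conservative rule of thumb, sketched below in Python: treat a shift as noise unless the gap between the two numbers exceeds the combined margins of error. (This is stricter than a formal significance test, which suits the “don’t chase tiny differences” advice above.)</p>

```python
# Noise-or-signal check: a difference only counts as evidence of a real
# change if the two ±moe intervals around the observed percentages
# do not overlap. Deliberately conservative, not a formal hypothesis test.
def looks_like_noise(p1, p2, moe):
    """True if the ±moe intervals around p1 and p2 overlap."""
    return abs(p1 - p2) <= 2 * moe

# 37% vs 39% with ±3%: intervals 34-40 and 36-42 overlap -> noise.
print(looks_like_noise(37, 39, 3))   # True
# 37% vs 45% with ±3%: intervals 34-40 and 42-48 don't overlap.
print(looks_like_noise(37, 45, 3))   # False
```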



<p>6. <em>Sampling Error: Why Small Changes Can Be Misleading</em><strong><em><br></em></strong>Even if you follow all the rules, research isn’t perfect. If you run a survey today and repeat it with a different (but equally structured) group tomorrow,&nbsp;you’ll still get slightly different results.&nbsp;That’s called&nbsp;sampling error, and it’s why minor shifts in survey data shouldn’t send you into a panic. Marketers often freak out over tiny dips in survey numbers. If customer satisfaction was at&nbsp;82% last quarter&nbsp;and&nbsp;dropped to 80% this quarter, is that a real issue? Not necessarily. It could just be&nbsp;<em>noise</em>&nbsp;in the data. Instead of obsessing over minor fluctuations,&nbsp;watch for trends over time.&nbsp;A slow, steady decline?&nbsp;That’s a red flag.&nbsp;A one-time&nbsp;2% drop?&nbsp;Probably not worth losing sleep over.</p>



<p>7. <em>The Danger of Self-Selection Bias</em><strong><em><br></em></strong>If you let people&nbsp;choose&nbsp;whether to participate in a survey, congratulations—you’ve just created biased data. People who&nbsp;willingly&nbsp;take surveys are not a&nbsp;random&nbsp;cross-section of the population. They’re typically:&nbsp;1. Extremely happy 2. Extremely angry 3. Extremely bored.&nbsp;This is why&nbsp;online reviews&nbsp;are&nbsp;terrible for accurate research. The neutral majority?&nbsp;They don’t bother.&nbsp;The same thing happens with&nbsp;political polling. Certain voter groups&nbsp;are&nbsp;less likely to respond, skewing results. This is why&nbsp;pollsters have been embarrassingly wrong&nbsp;in multiple elections.&nbsp;I mean, ask yourself this: who answers a call from an unknown number these days?&nbsp;</p>



<p>8. <em>The Perils of Leading Questions and Confirmation Bias</em><strong><em><br></em></strong>“Wouldn’t you agree that our product is revolutionary?” That’s not research. That’s&nbsp;manipulation. Leading questions&nbsp;push respondents toward a particular answer,&nbsp;whether they realize it or not. It’s one of the easiest ways to ruin a survey, and&nbsp;marketers do it all the time. Then there’s&nbsp;<em>confirmation bias</em>, where marketers&nbsp;interpret data&nbsp;in ways that support what they&nbsp;already believe. If you want to prove that customers&nbsp;love your product, you’ll find data that&nbsp;supports it—ignoring the parts that don’t.&nbsp;Good research asks neutral questions and accepts results—even when they aren’t what you want.</p>



<p>9. <em>Statistical Significance vs. Practical Significance</em><strong><em><br></em></strong>Just because a result is statistically significant doesn’t mean it’s useful. Marketers often get excited about a tiny percentage difference and assume it means something important. Imagine a brand tests two ad headlines, and&nbsp;Headline A gets 51% engagement&nbsp;while&nbsp;Headline B gets 49%. Statistically, that might be&nbsp;significant. But does a&nbsp;2% difference&nbsp;justify an entire campaign overhaul? Probably not. Statistical significance simply means the result&nbsp;isn’t random—not that it’s meaningful. The better question is:&nbsp;Does this difference actually matter for business?&nbsp;If not, move on.</p>
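<p>To make the 51% vs 49% example concrete, here is a two-proportion z-test sketch in Python. The per-headline sample sizes are invented for illustration; the point is that “significant” depends entirely on n, while the 2-point gap stays just as small:</p>

```python
# Two-proportion z-test: is the gap between two observed rates bigger
# than sampling noise would predict? |z| > 1.96 ~ significant at 95%.
import math

def two_proportion_z(p1, p2, n1, n2):
    """z-statistic for the difference between two observed proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# ~1,000 viewers per headline: 51% vs 49% is not even statistically
# significant (|z| < 1.96), let alone practically meaningful.
print(round(two_proportion_z(0.51, 0.49, 1_000, 1_000), 2))    # 0.89
# ~10,000 per headline: now it is "significant" (|z| > 1.96), but the
# 2-point gap is exactly as small as it was before.
print(round(two_proportion_z(0.51, 0.49, 10_000, 10_000), 2))  # 2.83
```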



<p>10. <em>Correlation vs. Causation: The Oldest Statistical Sin</em><strong><em><br></em></strong>Just because two numbers move together doesn’t mean one causes the other. Ice cream sales and drowning deaths both rise in summer, but no one’s blaming the Rocky Road. Marketers fall for this all the time. A study may find that&nbsp;people who drink coffee are more successful, but does that mean coffee creates success? Or are ambitious, hardworking people just more likely to need caffeine? Any claim that&nbsp;“X causes Y”&nbsp;should immediately raise red flags. If the data doesn’t prove&nbsp;direct causation,&nbsp;assume someone’s trying to make a weak argument sound impressive.</p>



<p><strong>How to Not Get Tricked by Bad Stats</strong></p>



<p>Use research to spot trends, not predict the future. Never base big decisions on tiny differences. Always check sample size and margin of error. Be skeptical of overly specific breakdowns. Correlation is not causation. If the numbers look weird, they probably are. And just remember: 87.5% of all statistics are made up. Including that one.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>What’s Next for Marketing Research: Are Humans Still in the Game?   </title>
		<link>https://rosecreative.marketing/whats-next-for-marketing-research-are-humans-still-in-the-game/</link>
		
		<dc:creator><![CDATA[John Rose]]></dc:creator>
		<pubDate>Mon, 18 Nov 2024 11:14:52 +0000</pubDate>
				<category><![CDATA[Expertise]]></category>
		<category><![CDATA[Insight]]></category>
		<category><![CDATA[Marketing Research]]></category>
		<category><![CDATA[AI content]]></category>
		<category><![CDATA[Consumer Insights]]></category>
		<category><![CDATA[Research Trends]]></category>
		<category><![CDATA[Rose Creative Marketing]]></category>
		<guid isPermaLink="false">https://rosecreative.marketing/?p=40887</guid>

					<description><![CDATA[As AI continues to infiltrate every facet of business, marketing research stands at a crossroads between human intuition...]]></description>
										<content:encoded><![CDATA[
<p class="has-medium-font-size">As AI continues to infiltrate every facet of business, marketing research stands at a crossroads between human intuition and machine precision.</p>



<p>Now that AI has permeated every aspect of business and the pace of its development suggests it’s coming for all our jobs, it’s easy to imagine a future where marketing research is entirely automated. With its vast computing power, AI could soon dominate this field, processing data faster and more accurately than human teams ever could.</p>



<p>I should probably preface this article by admitting that I’ve always harbored a distinct mistrust for much of the research brands conduct around the marketing space, particularly regarding its impact on creative output. In my experience, research is often a sanity check—a way to justify or rule out ideas. It may suggest what not to do but rarely provides actionable insights on what to pursue. If research were that effective, we’d live in a world of only hit songs, blockbuster movies, best-selling novels, and advertising campaigns that never miss.</p>



<p>I’m also drawn to the Observer Effect, which suggests that the simple act of observation may influence the subject being observed. Simply put, &#8220;The act of asking the question changes the answer.&#8221; This inherent limitation underscores the challenges of relying solely on research, even as AI promises to eliminate human biases (though maybe insert a few of its own).</p>



<p>AI has already disrupted industries by automating tasks previously thought to require human intuition and creativity. In marketing research, it promises to transform how we collect, analyze, and apply data. From consumer sentiment analysis to campaign optimization, the technology seems unstoppable. But can it truly replace human oversight?</p>



<p><strong>The Current Necessity of Human Expertise in Marketing Research</strong></p>



<p>While AI excels at data processing, humans remain crucial for interpreting nuanced insights, especially in qualitative research. For instance, Procter &amp; Gamble relies on human analysts to decode cultural nuances in global campaigns, ensuring messages resonate across diverse markets.</p>



<p>Even in numbers-driven industries, human involvement is critical. According to a 2023 Forrester report, 68% of companies that use AI for market research still rely on human teams to validate insights and translate them into actionable strategies.</p>



<p>Moreover, AI algorithms often inherit biases from their training data. Human oversight is essential to identify and correct these biases. For example, Facebook’s ad-targeting AI was found to skew results based on racial and gender biases in a 2023 audit, prompting the platform to double its human oversight team.</p>



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="1024" height="576" src="https://rosecreative.marketing/wp-content/uploads/2024/11/NIKE-min-1-1024x576.png" alt="" class="wp-image-40885" srcset="https://rosecreative.marketing/wp-content/uploads/2024/11/NIKE-min-1-1024x576.png 1024w, https://rosecreative.marketing/wp-content/uploads/2024/11/NIKE-min-1-300x169.png 300w, https://rosecreative.marketing/wp-content/uploads/2024/11/NIKE-min-1-768x432.png 768w, https://rosecreative.marketing/wp-content/uploads/2024/11/NIKE-min-1-1536x864.png 1536w, https://rosecreative.marketing/wp-content/uploads/2024/11/NIKE-min-1.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption"><em>Nike used AI data to identify trending themes for its 2020 campaign &#8220;You Can’t Stop Us&#8221; but relied on human creativity to deliver its powerful message.</em></figcaption></figure>



<p>Creativity thrives on intuition and risk, areas where research alone falls short. Nike’s campaigns balance AI-driven insights with human-led storytelling, crafting narratives that connect emotionally with audiences worldwide. Their 2020 &#8220;You Can’t Stop Us&#8221; campaign integrated AI data to identify trending themes but relied heavily on human creativity to deliver its powerful, unifying message.</p>



<p><strong>How AI is Revolutionizing Marketing Research</strong><br>Platforms like Oomiji integrate open-ended questions and provide real-time analysis, challenging traditional focus groups by offering deeper, more immediate insights. In a recent case study, Oomiji helped a mid-sized retailer boost customer satisfaction by 35% by identifying previously overlooked concerns.</p>



<p>Similarly, Qualtrics reported that companies using their AI-driven tools cut research timelines by 40% while increasing actionable insights by 25% in 2023. SurveyMonkey has rolled out advanced AI analytics, allowing businesses to generate reports in under an hour, a process that once took weeks.</p>



<p>The future of focus groups may lie in virtual environments led by AI, offering cost savings and scalability. In 2023, Coca-Cola piloted VR-based focus groups, which provided deeper behavioral insights, reducing campaign misalignment by 30%.</p>



<p>AI tools offer instant tabulation of survey data, delivering polished reports in minutes. However, despite their efficiency, these reports often miss contextual nuances. A 2023 study by McKinsey revealed that 48% of companies using automated tools faced misinterpretations in at least one major campaign, underscoring the need for human oversight.</p>



<p><strong>The Potential for Full Automation in the Future</strong><br>Advanced AI can predict consumer trends with remarkable accuracy, potentially eliminating the need for human hypothesis testing. Case in point: Amazon’s predictive analytics not only improved product recommendations but also increased conversion rates by 22% in 2023.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="936" height="624" src="https://rosecreative.marketing/wp-content/uploads/2024/11/Wendy-min.png" alt="" class="wp-image-40886" srcset="https://rosecreative.marketing/wp-content/uploads/2024/11/Wendy-min.png 936w, https://rosecreative.marketing/wp-content/uploads/2024/11/Wendy-min-300x200.png 300w, https://rosecreative.marketing/wp-content/uploads/2024/11/Wendy-min-768x512.png 768w" sizes="(max-width: 936px) 100vw, 936px" /><figcaption class="wp-element-caption"><em>Wendy’s faced significant backlash in 2023 when it announced plans to test AI-driven dynamic pricing and digital menu boards in its U.S. restaurants.</em></figcaption></figure>



<p>AI platforms are also starting to develop creative content based on data-driven insights, such as ad copy and design variations. Levi Strauss experimented with AI-generated designs last year, saving 15% on production costs while maintaining high consumer approval rates.</p>



<p>However, the leap to full automation isn’t without hurdles. Gartner projects that only 12% of organizations will achieve fully automated marketing research workflows by 2025, primarily due to trust and ethical concerns.</p>



<p><strong>The Limits of AI Today: Trust, Accuracy, and Ethics</strong><br>Despite its capabilities, AI struggles with transparency. How it arrives at conclusions is often a black box, raising trust issues. According to a recent PwC survey, 72% of business leaders expressed concern over the opacity of AI-driven insights.</p>



<p>Errors in automated sentiment analysis can mislead campaign strategies. For example, Wendy’s faced significant backlash in 2023 after AI misinterpreted customer sentiment, leading to a poorly received promotion that cost the company $10 million. The misstep highlighted the dangers of relying too heavily on AI without human checks.</p>



<p><strong>Embracing a Hybrid Future</strong><br>While AI will continue to enhance marketing research, human expertise will remain critical for ensuring accuracy, creativity, and ethical compliance. Businesses must embrace AI while maintaining a strong human element, ensuring the best of both worlds in their marketing research endeavors.</p>



<p class="has-small-font-size">Sources:<br><em>• Statista, “AI Use in Marketing,” 2024.<br>• Forrester, “AI in Market Research,” 2023.<br>• PwC, “Trust in AI Systems,” 2024.<br>• McKinsey, “State of AI,” 2023.</em></p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
