Tag: Artificial Intelligence

How Social Media Giants Leverage Big Data And ML To Serve Users Better

Social networks keep growing in popularity at a rapid pace. As of 2018, the number of social media users exceeded 3 billion, and there is little sign of that trend reversing anytime soon.

To get people hooked and deliver wow user experiences, Facebook, YouTube, LinkedIn, and other big players apply the cutting edge of technology, with big data solutions being the go-to option. Underpinned by artificial intelligence (AI) and machine learning (ML), these solutions let social media thoroughly analyze large amounts of user data, derive actionable insights, and, in turn, deliver hyper-personalized offerings.

And this is just one example of how machine learning solutions can be implemented in the social network environment. Read further to find out how giants like Instagram, Twitter, and Reddit are taking this advanced tech up another notch.

Instagram: In a fight against trolling

Coming in sixth on the list of the most popular social networks worldwide, Instagram aims to keep its platform as welcoming as possible. For this purpose, it capitalizes on DeepText, Facebook’s “learning-based text understanding engine that can comprehend, with near-human accuracy, the textual content of several thousand posts per second.”

Before going live, the system was trained on at least two million comments, each categorized into segments such as “bullying, racism, or sexual harassment.” Now users simply turn on the automatic and manual filters in their account settings to activate offensive-comment filtering.
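
DeepText itself is proprietary, but the underlying idea, training a classifier on labeled comments and hiding new ones only when the model is confident they are offensive, can be sketched with off-the-shelf tools. The tiny data set, labels, and confidence threshold below are illustrative assumptions, not Instagram’s actual pipeline.

```python
# A minimal, illustrative sketch of comment filtering via supervised text
# classification. This is NOT DeepText; the data, labels, and threshold
# are invented for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in for the ~2 million labeled comments mentioned above.
train_comments = [
    "you are an idiot",
    "great photo, love the colors",
    "nobody wants you here",
    "interesting take, thanks for sharing",
]
train_labels = ["offensive", "ok", "offensive", "ok"]

# TF-IDF features plus logistic regression: a simple, generic text classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_comments, train_labels)

def should_hide(comment: str, threshold: float = 0.8) -> bool:
    """Hide a comment only when the model is confident it is offensive."""
    proba = model.predict_proba([comment])[0]
    offensive_idx = list(model.classes_).index("offensive")
    return proba[offensive_idx] >= threshold

# A production system would be trained on millions of comments and use a
# neural model; the threshold lets the filter trade recall for precision.
for comment in ["nobody wants you here", "great shot!"]:
    print(comment, "->", "hide" if should_hide(comment) else "show")
```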

Image source: geek.com

To determine tone and intention, i.e. to interpret a target word or phrase correctly and to distinguish abusive language from constructive criticism across cultures and languages, Instagram’s AI also studies the contextual meaning of the surrounding words.

In addition, DeepText helps Instagram detect spam. Powered by huge data assets and human input, the system identifies fake accounts and cleans up their spam comments on posts and live videos. This feature is currently available in nine languages, but the social media behemoth is working to expand that list.

To improve its AI system’s accuracy and avoid becoming an over-sanitized platform, Instagram continues gathering and analyzing new data sets.

Twitter: A step toward engaging users

Twitter, another social media giant, relies on ML to get image cropping right. Using data from eye trackers, Twitter trains its neural networks to predict the areas users are most likely to look at, which are usually faces, text, animals, and other salient image regions.
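
To make the idea of saliency-driven cropping concrete, here is a minimal sketch: given a per-pixel saliency map produced by some model, slide a fixed-size window over it and keep the crop that captures the most saliency. The window size, stride, and scoring are assumptions for illustration, not Twitter’s production logic.

```python
# Illustrative saliency-driven cropping (not Twitter's production code).
# Assumes some saliency model has already produced a per-pixel map.
import numpy as np

def best_crop(saliency: np.ndarray, crop_h: int, crop_w: int, stride: int = 16):
    """Return (top, left) of the crop window containing the most saliency."""
    H, W = saliency.shape
    # An integral image lets each window's saliency sum be computed in O(1).
    integral = np.pad(saliency, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    best, best_score = (0, 0), -1.0
    for top in range(0, H - crop_h + 1, stride):
        for left in range(0, W - crop_w + 1, stride):
            score = (integral[top + crop_h, left + crop_w]
                     - integral[top, left + crop_w]
                     - integral[top + crop_h, left]
                     + integral[top, left])
            if score > best_score:
                best, best_score = (top, left), score
    return best

# Fake saliency map with a hot spot (say, a face) in the lower-right quadrant.
sal = np.zeros((480, 640))
sal[300:380, 400:500] = 1.0
print(best_crop(sal, crop_h=240, crop_w=240))  # a window covering the hot spot
```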

Because neural networks for saliency prediction tend to be too slow and cumbersome for smart auto-cropping in real time, Twitter combines two optimization techniques. The first, knowledge distillation, trains a smaller network to imitate the more powerful one, using a set of images and third-party saliency data. The second, Fisher pruning, removes features or parameters that are in some sense redundant, lowering the computational cost.
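
Knowledge distillation in this setting boils down to training a small “student” network to reproduce the saliency maps of the large “teacher.” Below is a hedged PyTorch-style sketch; the architectures, loss, and training data are placeholders, not Twitter’s actual networks.

```python
# Hedged sketch of distilling a large saliency model into a faster student.
# Architectures and training details are invented for illustration.
import torch
import torch.nn as nn

teacher = nn.Sequential(  # stand-in for the large, slow saliency network
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 1),
).eval()

student = nn.Sequential(  # much smaller network to run at upload time
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # match the teacher's saliency maps pixel by pixel

for step in range(100):                  # placeholder training loop
    images = torch.rand(8, 3, 96, 96)    # real photos in practice
    with torch.no_grad():
        target = teacher(images)         # the teacher's saliency prediction
    loss = loss_fn(student(images), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Fisher pruning would then rank the student’s remaining channels by an estimate of how much removing each one hurts the loss and delete the least useful ones, shrinking the network further at little cost in accuracy.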

Such a smart combination allows Twitter to obtain much more runtime-efficient architectures for saliency prediction and to crop images as soon as they’re uploaded — 10x faster than in a vanilla approach. This makes the uploaded photos more engaging and positively impacts the overall user experience.

Below is an example of how Twitter’s shift from a face detection to a saliency prediction algorithm redefined image cropping.

Image source: blog.twitter.com

Reddit: In a bid to improve website search

For Reddit, a vibrant hub of internet news, pics, stories, memes, and videos, advanced search is a top priority. So it stands to reason that the company applies the best available tech to improve its search capabilities and provide users with a custom-fit stream of high-quality content.

To make its search relevant, fast, and easy to scale with the platform’s growth, Reddit employs Fusion, Lucidworks’ AI-based search platform. Fusion helped the company tackle the challenge of updating its indexing pipeline by pulling together data from several sources into one cohesive, canonical view. Reddit not only indexes newly created posts but also updates their relevance signals in real time, based on votes, comments, and other activity.
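
Fusion is built on top of Apache Solr, so the real-time indexing pattern described above can be sketched against a plain Solr endpoint. The host, collection name, and field names below are assumptions for illustration, not Reddit’s actual schema or pipeline.

```python
# Illustrative sketch: index new posts and update their relevance signals
# in near real time via Solr's JSON update API. Host, collection, and
# fields are assumed for demonstration; Reddit's Fusion setup is more involved.
import requests

SOLR_UPDATE_URL = "http://localhost:8983/solr/posts/update?commit=true"

def index_new_post(post_id: str, title: str, body: str, subreddit: str) -> None:
    """Index a newly created post with initial relevance signals."""
    doc = {"id": post_id, "title": title, "body": body,
           "subreddit": subreddit, "score": 1, "num_comments": 0}
    requests.post(SOLR_UPDATE_URL, json=[doc]).raise_for_status()

def update_relevance_signals(post_id: str, score: int, num_comments: int) -> None:
    """Atomically update vote and comment counts so ranking stays fresh."""
    partial = {"id": post_id,
               "score": {"set": score},
               "num_comments": {"set": num_comments}}
    requests.post(SOLR_UPDATE_URL, json=[partial]).raise_for_status()

index_new_post("t3_abc123", "A question about pruning", "How does it work?",
               "MachineLearning")
update_relevance_signals("t3_abc123", score=542, num_comments=87)
```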

The partnership with Lucidworks has given Reddit impressive results:
1. There was a 33% increase in posts indexed.
2. Reindexing all of the site’s content was cut from 11 hours to 5.
3. The error rate was down by two orders of magnitude, with 99% of search results served in under 500ms.
4. The number of machines needed to run search dropped from 200 to 30.

On top of that, Reddit managed to boost the user experience while keeping operational costs down. Here’s what the tech stack of the revitalized search platform looks like now:

Image source: redditblog.com

A final word

From crafting personalized offers to fighting spam to enhancing search, machine learning delivers business value to an array of social media platforms. Facebook, Instagram, Twitter, and others have already found ML-enabled solutions to reap these benefits. Have you?


http://socialmediaweek.org/blog/2019/01/how-social-media-giants-leverage-big-data-and-ml-to-serve-users-better/

AI Researchers Warn of Urgent Need of Algorithm Supervision

It has been yet another turbulent year for artificial intelligence. The public has applauded its achievements, but also couldn’t help but frown upon its faults.

Facebook has come under fire for facilitating genocide in Myanmar; Google was accused of building a censored version of its search engine for the Chinese government; and then there was the Cambridge Analytica data scandal, which got under every Facebook user’s skin. The list goes on.

The public seems to be stuck in a blind spot: people don’t know enough about AI, yet they bear the brunt whenever AI goes wrong. This is the focus of the annual report just released by AI Now, a research group made up of employees from tech companies including Microsoft and Google.

The report elaborates on what the researchers term “the accountability gap”: many social challenges posed by AI and its algorithms urgently need to be addressed, yet they remain unregulated because the public lacks the tools and knowledge to hold tech giants accountable. The report also puts forward recommendations on the steps needed to address these problems.

The case studies presented in the report are nerve-racking. According to the report, throughout the year it was actual people who suffered from the failures of experimental AI systems. In March, AI-powered cars killed drivers and pedestrians; in May, an AI voice recognition system developed in the UK falsely flagged immigration fraud, leading to thousands of canceled visas and deportations; in July, IBM’s Watson system gave “unsafe and incorrect treatment recommendations.” It’s horrifying to think how many more cases remain unreported.

AI software is usually introduced into public-sector domains to cut costs and increase efficiency. The result, however, is often systems that make decisions that can be neither explained nor appealed. “A lot of their claims about benefit and utility are not backed by publicly accessible scientific evidence,” AI Now’s co-founder Meredith Whittaker told The Verge.

The report puts forward ten recommendations for securing a healthier future for AI, among them the need for more precise oversight mechanisms and for marketing promises that match reality.

First, sector-specific agencies need to be put in place to oversee, audit, and monitor tech companies that are developing new systems. The report argues that a nationwide, standardized AI monitoring model would not meet the requirements of detailed regulation, since domains like health, education, and criminal justice each have their own frameworks, hazards, and nuances. Marketers implementing AI systems for different clients should keep this in mind too, so that the technology is not turned against us.

Second, marketing promises about AI products and services should be accurate. Consumer protection agencies, in particular, should apply “truth-in-advertising” laws to AI products. The report warns that AI vendors should be held to even higher standards for what they promise, especially since the scientific evidence that is supposed to back those promises is still inadequate.


http://socialmediaweek.org/blog/2018/12/ai-researchers-warn-of-urgent-need-of-algorithm-supervision/

Here Are 3 Things Mark Zuckerberg Says He Learned About Artificial Intelligence

What if your security camera could not only see who’s at your door, but also identify whether it’s a guest you’re expecting, alert you when they arrive, and let them in? Or how about a speaker system that automatically plays music as your child wakes up? That’s the type of functionality Facebook CEO Mark Zuckerberg…

http://fortune.com/2016/12/19/mark-zuckerberg-artificial-intelligence/

Facebook Pulls 9/11 Anniversary Topic after Promoting Conspiracy Article

In the latest misstep attributed to its newly-automated Trending Topics feed, Facebook pulled a topic dedicated to the anniversary of September 11th after an article appearing to support ‘Truther’ conspiracy theories topped the feed. The article, from the UK’s Daily Star, claimed to feature “footage that ‘proves bombs were planted in the Twin Towers.'” It…

http://fortune.com/2016/09/10/facebook-pulls-911-topic/

Messaging Bots May Soon Invade Your Inbox

If messaging software startups like Slack and Kore have their way, bots will become far more common in the not-so-distant future for organizing and prioritizing workplace communications. Only the messages won’t just come from other humans–they’ll be generated by software applications and Internet-connected machines that need to share important updates. Chat “bots,” simple apps programmed…

http://fortune.com/2016/03/29/messaging-bots-slack-kore/

Facebook Wants to Be Cooler Than the Dictionary

It’s hard to keep up when you don’t understand the latest terms. But Facebook is developing a new technology that could learn new slang and buzzwords from the coolest of the cool kids: Facebook users. The company was just awarded a patent for developing a “social glossary.” The patent was granted in February, as Business…

http://fortune.com/2016/03/09/facebook-social-glossary-patent/