SENSING THE FUTURE
Computers will gain the five senses within the next five years, according to experts at IBM, changing the way we use our gadgets
TOUCH
Imagine using your smartphone to shop for your wedding dress and being able to feel the satin or silk of the gown, or the lace on the veil, all from the surface of the screen. Or imagine feeling the beading and weave of a blanket made by a local artisan halfway around the world.
In five years, industries such as retail will be transformed by the ability to “touch” a product through your mobile device.
Scientists are developing applications for retail, healthcare and other sectors that use haptic, infrared and pressure-sensitive technologies to simulate touch: for example, the texture and weave of a fabric as a shopper brushes a finger over the image of the item on a device screen.
Using the phone's vibration capabilities, the technology will give every object a unique set of vibration patterns that represents the touch experience: short, fast pulses, or longer, stronger strings of vibrations.
The vibration pattern will differentiate silk from linen or cotton, helping simulate the physical sensation of actually touching the material.
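The mapping from materials to vibration patterns described above could look something like the minimal sketch below. The pattern values, material names and the vibrate() call are assumptions made purely for illustration; a real implementation would use the device's haptics API and calibrated touch profiles.

```python
# Illustrative sketch: map a material to a vibration pattern that could be
# played back through a phone's haptic motor. The patterns below are made-up
# placeholders, not an actual encoding; vibrate() stands in for a real device
# haptics call, which is assumed rather than shown.

# Each pattern is a list of (on_ms, off_ms) pulses: short/fast for smooth
# fabrics, longer/stronger bursts for coarser weaves.
VIBRATION_PATTERNS = {
    "silk":   [(10, 40)] * 6,            # brief, light pulses
    "linen":  [(35, 25)] * 4,            # medium, slightly uneven feel
    "cotton": [(25, 30)] * 5,
    "lace":   [(15, 15), (60, 40)] * 3,  # alternating fine and raised texture
}

def vibrate(pattern):
    """Placeholder for a device haptics call; here it just prints the pulses."""
    for on_ms, off_ms in pattern:
        print(f"buzz {on_ms} ms, pause {off_ms} ms")

def simulate_touch(material):
    pattern = VIBRATION_PATTERNS.get(material.lower())
    if pattern is None:
        raise ValueError(f"no touch profile recorded for {material!r}")
    vibrate(pattern)

simulate_touch("silk")
```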
Current uses of haptic and graphic technology in the gaming industry take the end user into a simulated environment.
The opportunity and challenge here is to make the technology so interwoven into everyday experiences that it brings greater context to our lives.
It will become ubiquitous in our everyday lives, turning mobile phones into tools for natural and intuitive interaction with the world around us.
HEARING
Ever wish you could make sense of the sounds all around you and be able to understand what’s not being said? Within five years, a distributed system of clever sensors will detect elements of sound such as sound pressure, vibrations and sound waves at different frequencies. It will interpret these inputs to predict when trees will fall in a forest or when a landslide is imminent. Such a system will “listen” to our surroundings and measure movements, or the stress in a material, to warn us if danger lies ahead.
Sensors will detect raw sounds, and a system receiving this data will, much like the human brain, take into account other “modalities,” such as visual or tactile data, and classify and interpret the sounds based on what it has learned.
When new sounds are detected, the system will form conclusions based on previous knowledge and by recognising patterns.
For example, “baby talk” will be understood as a language, telling parents or doctors what infants are trying to communicate.
Sounds can be a trigger for interpreting a baby’s behaviour or needs. By being taught what baby sounds mean – whether fussing indicates a baby is hungry, hot, tired or in pain – a sophisticated speech-recognition system would correlate sounds and babbles with other sensory or physiological data such as heart rate, pulse and temperature.
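As a rough illustration of how such a correlation might work, the sketch below matches simple audio features and physiological readings against previously labelled examples. The feature values, labels and the nearest-neighbour rule are assumptions for illustration, not the system described above.

```python
# A minimal sketch: classify a baby's sound by comparing simple audio features
# (pitch, loudness) together with physiological readings (heart rate,
# temperature) against previously labelled examples. All numbers are invented.
import math

# (pitch_hz, loudness_db, heart_rate_bpm, temp_c) -> meaning
LABELLED_EXAMPLES = [
    ((450, 70, 130, 36.8), "hungry"),
    ((600, 85, 160, 36.9), "in pain"),
    ((380, 60, 125, 37.8), "too hot"),
    ((350, 55, 115, 36.7), "tired"),
]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def interpret(observation):
    """Return the label of the closest known example (1-nearest-neighbour)."""
    return min(LABELLED_EXAMPLES, key=lambda ex: distance(ex[0], observation))[1]

print(interpret((590, 82, 155, 37.0)))   # -> "in pain"
```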
In the future, by learning about emotion and being able to sense mood, systems will pinpoint aspects of a conversation and analyse pitch, tone and hesitancy to help us have more productive dialogues, improving customer call centre interactions or allowing us to interact seamlessly with different cultures. Visit the Web link: http://bit.ly/V1mABW
SMELL
During the next five years, tiny sensors embedded in your computer or phone will detect if you’re coming down with a cold or other illness. By analysing odours, biomarkers and thousands of molecules in someone’s breath, doctors will have help diagnosing the onset of ailments such as liver and kidney disorders, asthma, diabetes and epilepsy by detecting which odours are normal and which are not.
Scientists are already sensing environmental conditions and gases to preserve works of art. This innovation is beginning to be applied to clinical hygiene. For example, antibiotic-resistant bacteria such as Methicillin-resistant Staphylococcus aureus (MRSA) are commonly found on the skin and can be easily transmitted wherever people are in close contact. One way of fighting MRSA exposure in healthcare institutions is by ensuring medical staff follow clinical hygiene guidelines. In the future, technology will “smell” surfaces for disinfectants to determine whether rooms have been sanitised. Using wireless networks, sensors will gather and measure data on various chemicals, and the system will continuously learn and adapt to new smells over time.
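A hedged sketch of that adaptive matching idea follows: compare a vector of chemical-sensor readings against known odour profiles, and treat anything too far from all of them as a new smell to be learned. The sensor channels, profiles and threshold are invented for illustration.

```python
# Rough sketch of adaptive odour matching: pick the closest known profile to a
# sensor reading, and store readings that match nothing as new odours to learn.
import math

KNOWN_ODOURS = {
    "disinfectant": [0.9, 0.1, 0.2],   # assumed channels: alcohol, ammonia, organic
    "clean_air":    [0.1, 0.0, 0.1],
    "organic_soil": [0.2, 0.3, 0.8],
}
NOVELTY_THRESHOLD = 0.5

def classify(reading):
    name, profile = min(KNOWN_ODOURS.items(),
                        key=lambda kv: math.dist(kv[1], reading))
    if math.dist(profile, reading) > NOVELTY_THRESHOLD:
        KNOWN_ODOURS[f"unknown_{len(KNOWN_ODOURS)}"] = reading  # adapt over time
        return "unrecognised odour (stored for learning)"
    return name

print(classify([0.85, 0.15, 0.25]))   # close to the disinfectant profile
```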
Advanced sensor and communication technology, in combination with deep learning systems, can measure data in places never thought possible. For example, computers can be used in agriculture to “smell” or analyse the soil condition. In urban areas, this technology will be used to monitor issues with refuse, sanitation and pollution – helping city agencies spot potential problems before they get out of hand. Visit the Web link: http://bit.ly/V4F5da
TASTE
What if we could make healthy foods taste delicious by using a completely different kind of computing system that is built for creativity?
Researchers are developing a computing system that actually experiences flavour and will work with chefs to create the tastiest and most novel recipes.
It will break down ingredients to their molecular level and blend the chemistry of food compounds with the psychology behind what flavours and smells humans prefer.
By comparing this with millions of recipes, the system will be able to create new flavour combinations that pair, for example, roasted chestnuts with other foods such as cooked beetroot, fresh caviar, and dry-cured ham.
A system like this can also be used to help us eat healthier, creating novel flavour combinations that will make us crave a vegetable casserole instead of potato chips.
The computer will be able to use algorithms to determine the precise chemical structure of food and why people like certain tastes.
These algorithms will examine how various chemicals in the food interact with each other, the molecular complexity of flavour compounds and their bonding structure, and use that information, together with models of perception, to predict the taste appeal of flavours.
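As a toy illustration of the pairing idea, the sketch below scores candidate ingredient pairs by the flavour compounds they share, weighted by an assumed pleasantness value per compound. The compound lists and weights are placeholders, not real food-chemistry data or the actual method.

```python
# Toy flavour-pairing scorer: rank partners for a base ingredient by shared
# flavour compounds, weighted by an assumed per-compound pleasantness value.

COMPOUNDS = {
    "roasted chestnut": {"pyrazine", "furaneol", "vanillin"},
    "cooked beetroot":  {"geosmin", "furaneol"},
    "fresh caviar":     {"trimethylamine", "pyrazine"},
    "dry-cured ham":    {"pyrazine", "furaneol", "methional"},
    "potato chips":     {"methional", "pyrazine"},
}
PLEASANTNESS = {"pyrazine": 0.8, "furaneol": 0.9, "vanillin": 0.7,
                "geosmin": 0.4, "trimethylamine": 0.2, "methional": 0.6}

def pairing_score(a, b):
    shared = COMPOUNDS[a] & COMPOUNDS[b]
    return sum(PLEASANTNESS[c] for c in shared)

base = "roasted chestnut"
ranked = sorted((pairing_score(base, other), other)
                for other in COMPOUNDS if other != base)
for score, other in reversed(ranked):
    print(f"{base} + {other}: {score:.1f}")
```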
Not only will it make healthy foods more palatable – it will also surprise us with unusual pairings of unlikely foods that are actually designed to maximise our total experience of taste and flavour.
For people with special dietary needs, such as individuals with diabetes or heart disease, it would develop flavours and recipes that keep blood sugar and fat content regulated while still satisfying their sweet tooth.
Visit the Web link: http://bit.ly/XDxMab
SIGHT
We take an estimated 500 billion photos a year and upload more than 72 hours of video on YouTube every minute.
However, computers today can only understand pictures by the text we use to tag or title them; the majority of the information – the actual content of the image – is still a mystery to artificial intelligence.
In the next five years, systems will not only be able to look at and recognise the contents of images and visual data, they will also turn the pixels into meaning, beginning to make sense out of it in a way that is similar to the way a human views and interprets a photograph or a painting.
In the future, “brain-like” capabilities will let computers analyse features such as colour, texture patterns or edge information and extract insights from visual media.
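As a bare-bones illustration of such low-level features, the sketch below computes a brightness histogram and a crude edge measure over a tiny greyscale image; the pixel values are hand-made placeholders standing in for real image data.

```python
# Minimal feature-extraction sketch over a tiny hand-made greyscale "image".

IMAGE = [
    [ 10,  12,  11, 200, 210],
    [  9,  14,  13, 205, 208],
    [ 11,  10,  12, 198, 212],
]

def histogram(image, bins=4):
    """Count pixels falling in each of `bins` equal brightness ranges (0-255)."""
    counts = [0] * bins
    for row in image:
        for pixel in row:
            counts[min(pixel * bins // 256, bins - 1)] += 1
    return counts

def edge_strength(image):
    """Sum of horizontal brightness jumps: a crude stand-in for edge detection."""
    return sum(abs(row[i + 1] - row[i]) for row in image for i in range(len(row) - 1))

print("histogram:", histogram(IMAGE))
print("edge strength:", edge_strength(IMAGE))
```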
This will have a profound impact on industries such as healthcare, retail and agriculture.
Within five years, these capabilities will be put to work in healthcare, making sense of massive volumes of medical information such as magnetic resonance imaging (MRI) scans, computed tomography (CT) scans, X-rays and ultrasound images to capture all the information tailored to particular anatomy or pathologies.
What is critical in these images could be very subtle or even invisible to the human eye and would require careful measurement.
But by being trained to discriminate what to look for in images – such as differentiating healthy from diseased tissue – and by correlating that information with patient records and the latest scientific literature, the system will be able to see problems.
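Continuing the sketch, the "train to discriminate" step might look like the following: given feature values for scans that have already been labelled, learn a simple decision rule and apply it to a new scan. The feature values, labels and threshold rule are fabricated purely for illustration.

```python
# Toy "train to discriminate" step: learn a one-feature threshold from labelled
# scans, then classify a new scan. All values are fabricated for illustration.

LABELLED_SCANS = [
    ({"edge_strength": 120, "mean_brightness": 40}, "healthy"),
    ({"edge_strength": 135, "mean_brightness": 45}, "healthy"),
    ({"edge_strength": 410, "mean_brightness": 90}, "diseased"),
    ({"edge_strength": 380, "mean_brightness": 85}, "diseased"),
]

def train_threshold(scans, feature="edge_strength"):
    """Midpoint between the two class means of one feature: the simplest possible rule."""
    healthy = [f[feature] for f, label in scans if label == "healthy"]
    diseased = [f[feature] for f, label in scans if label == "diseased"]
    return (sum(healthy) / len(healthy) + sum(diseased) / len(diseased)) / 2

def classify(scan, threshold, feature="edge_strength"):
    return "diseased" if scan[feature] > threshold else "healthy"

threshold = train_threshold(LABELLED_SCANS)
print(classify({"edge_strength": 350, "mean_brightness": 80}, threshold))
```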
These systems that can “see” will help doctors detect medical problems with far greater speed and accuracy than is physically possible with today’s technology.
Visit the Web link: http://bit.ly/12xoXDd