Showing posts with label Advance Technology. Show all posts

Monday, 9 October 2017

New software turns mobile-phone accessory into breathing monitor



The Optical Society (OSA)


Researchers have developed new software that makes it possible to use low-cost, thermal cameras attached to mobile phones to track how fast a person is breathing. This type of mobile thermal imaging could be used for monitoring breathing problems in elderly people living alone, people suspected of having sleep apnea or babies at risk for sudden infant death syndrome (SIDS).





In The Optical Society (OSA) journal Biomedical Optics Express, the researchers report that their new software combined with a low-cost thermal camera performed well when analyzing breathing rate during tests simulating real-world movement and temperature changes.

"As thermal cameras continue to get smaller and less expensive, we expect that phones, computers and augmented reality devices will one day incorporate thermal cameras that can be used for various applications," said Nadia Bianchi-Berthouze from University College London (UK) and leader of the research team. "By using low-cost thermal cameras, our work is a first step toward bringing thermal imaging into people's everyday lives. This approach can be used in places other sensors might not work or would cause concern."

In addition to detecting breathing problems, the new approach could one day allow the camera on your computer to detect subtle breathing irregularities associated with pain or stress and then send prompts that help you relax and regulate breathing. Although traditional video cameras can be used to track breathing, they don't work well in low-light situations and can cause privacy concerns when used for monitoring in nursing homes, for example.

"Thermal cameras can detect breathing at night and during the day without requiring the person to wear any type of sensor," said Youngjun Cho, first author of the paper. "Compared to a traditional video camera, a thermal camera is more private because it is more difficult to identify the person."

Personal thermal cameras

Thermal cameras, which use infrared wavelengths to reveal the temperature of an object or scene, have been used in a variety of monitoring applications for some time. Recently, their price and size have dropped enough to make them practical for personal use, with small thermal cameras that connect to mobile phones now available for around $200.

"Large, expensive thermal imaging systems have been used to measure breathing by monitoring temperature changes inside the nostrils under controlled settings," said Cho. "We wanted to use the new portable systems to do the same thing by creating a smartphone-based respiratory tracking method that could be used in almost any environment or activity. However, we found that in real-world situations this type of mobile thermal imaging was affected by changes in air temperature and body movement."

To solve these problems, the researchers developed algorithms that can be used with any thermal camera to compensate for ambient temperature changes and accurately track the nostrils while the person is moving. In addition, the new algorithms improve the way breathing signals are processed. Instead of averaging the temperature readings from 2D pixels around the nostrils, as has been done in the past, Cho developed a way to treat the area as a 3D surface to create a more refined measurement of temperature in the nostrils.
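The signal-processing step described above can be sketched as a spectral estimate: track the mean temperature around the nostrils over time, then find the dominant frequency in the respiratory band. This is an illustrative sketch, not the authors' published algorithm; the sampling rate, band limits, and synthetic sinusoidal signal are all assumptions.

```python
import numpy as np

def estimate_breathing_rate(temps, fs):
    """Estimate breaths per minute from a nostril-temperature series.

    temps: 1D array of mean nostril temperature readings (deg C)
    fs: sampling rate of the thermal camera in Hz
    """
    x = temps - np.mean(temps)             # remove the baseline temperature
    spectrum = np.abs(np.fft.rfft(x))      # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # Restrict to a plausible respiratory band: 0.1-0.7 Hz (6-42 breaths/min)
    band = (freqs >= 0.1) & (freqs <= 0.7)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                # Hz -> breaths per minute

# Synthetic signal: air flowing through the nostrils cools and warms the
# skin at 0.25 Hz (15 breaths/min), sampled at 8 Hz for one minute
fs = 8.0
t = np.arange(0, 60, 1.0 / fs)
temps = 34.0 + 0.3 * np.sin(2 * np.pi * 0.25 * t)
rate = estimate_breathing_rate(temps, fs)
```

On this clean synthetic signal the spectral peak recovers the 15 breaths-per-minute rate exactly; the compensation algorithms in the paper exist precisely because real thermal signals are far noisier.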

Testing in real-world situations

In addition to indoor laboratory tests, the researchers used the mobile thermal imaging approach to measure the breathing of volunteers in a scenario that involved breathing exercises with changes in ambient temperature and in a fully unconstrained test where volunteers walked around inside and outside of a building. During the walking tests, the thermal camera was placed between 20 and 30 centimeters from a person's face using a rig that attached the camera to a hat. A cord then connected the camera with a mobile phone carried by study volunteers. It is also possible to hold a smartphone with an attached thermal camera about 50 centimeters from the face to measure breathing.

"For all three types of studies, the algorithms showed significantly better performance in tracking the nostril area than other state-of-the-art methods," said Cho. "In terms of estimating the breathing rate, the tests outside the laboratory showed the best results when compared with the latest algorithms. Although the results were comparable to the traditional breathing belt sensor, for mobile situations our approach seems to be more stable because the belt tends to get loose."

Because the new approach is more stable than standard chest belt respiratory sensors, the method could potentially be used to optimize an athlete's performance by providing more reliable and accurate feedback on breathing patterns during exercise.

The researchers took their work one step further by inferring a person's mental load or stress through automatic breathing analysis. They used their thermal imaging software to track the breathing of people who were free to move around while performing various types of tasks, and the results aligned well with findings from studies that used much more sophisticated equipment, indicating the portable thermal-camera based approach could be a useful tool for apps that help people relax.

"By using mobile thermal imaging to monitor only breathing, we obtained results very comparable to what other studies had found," said Bianchi-Berthouze. "However, those studies used complex, state-of-the-art techniques that involved multiple sensors monitoring not just breathing but also heart rate."

The current version of the software doesn't estimate the breathing rate in real time, but the researchers are working to incorporate this capability and to test their algorithms in more real-life situations.



Sunday, 7 May 2017

YouTube Is In a Race With Facebook, Netflix, and Amazon Over TV’s Future

In one of the biggest changes in digital media, companies are racing to reinvent television so they can get access to the billions of dollars in TV advertising. And Google has made it clear that YouTube intends to be a major factor in that race.





YouTube has made a number of changes aimed at beefing up the professional side of its video lineup, including the launch of a subscription service called YouTube Red. On Thursday, it unveiled another major step—one that will bring it into even more direct conflict with Facebook and Amazon, both of which also have their sights set on dominating the future of TV.

YouTube announced at an event for advertisers in New York that it is premiering 40 new TV-style shows, many of which will feature not the usual homegrown talents that live on YouTube, but actual celebrities from traditional TV and Hollywood movies.




The initial slate of seven shows will include unscripted material from actors, talk-show hosts, musicians, and comedians such as Ellen DeGeneres, Katy Perry, Demi Lovato, Ludacris, and Kevin Hart. Other shows will be fronted by YouTube stars including The Slow-Mo Guys, who specialize in filming things that are being blown up at super-slow-motion speeds.


YouTube CEO Susan Wojcicki made a point of saying the new shows—all of which will be supported by advertising, rather than a subscription paywall like YouTube Red—shouldn't be taken as a sign that the service is turning its back on its user-generated past.



"YouTube is not TV and we never will be," Wojcicki said. "The platform that you all helped create represents something bigger." But it's clear that YouTube intends to be much more than just a platform for unknowns to make their mark with wisecracks or ad-hoc skits. It very much wants to be a conduit for more traditional fare as well.

The introduction of this new stable of shows means that YouTube is effectively taking three different routes towards getting more serious about TV, including Red—which carries content from YouTube stars such as PewDiePie and the Fine brothers—and YouTube TV, which debuted earlier this year and offers a cable-style package of traditional channels such as ESPN, NBC, and Fox.

Facebook has also made a number of moves towards getting more serious about TV, including steps that appear to be moving it away from the short-form, user-generated content that is popular on Facebook Live, and more toward longer-form, more traditional fare.

Last year, the giant social network hired Ricky Van Veen, one of the co-founders of the video site CollegeHumor, and assigned him to license or fund the creation of what sounds a lot like TV-style entertainment content, including comedy shows. Facebook is also said to be hiring a Hollywood producer.




Some of this is going to bring both YouTube and Facebook into conflict with Netflix, which has been spending billions to license TV shows, movies and other content such as comedy shows and reality TV.

According to some industry watchers, the price of this kind of entertainment has been climbing because Netflix has such deep pockets and is willing to pay.

Friday, 5 May 2017

Supercomputers assist in search for new, better cancer drugs



Better Cancer Drugs


Researchers use advanced computers to virtually discover and experimentally test new chemotherapy drugs and targets


Finding new drugs that can more effectively kill cancer cells or disrupt the growth of tumors is one way to improve survival rates for ailing patients. Researchers are using supercomputers to find new chemotherapy drugs and to test known compounds to determine if they can fight different types of cancer. Recent efforts have yielded promising drug candidates, potential plant-derived compounds and insights into how to design more effective drugs.


The model of full-length p53 protein bound to DNA as a tetramer. The surface of each p53 monomer is depicted with a different color.





Surgery and radiation remove, kill, or damage cancer cells in a certain area. But chemotherapy -- which uses medicines or drugs to treat cancer -- can work throughout the whole body, killing cancer cells that have spread far from the original tumor.
Increasingly, researchers looking to uncover and test new drugs use powerful supercomputers like those developed and deployed by the Texas Advanced Computing Center (TACC).
"Advanced computing is a cornerstone of drug design and the theoretical testing of drugs," said Matt Vaughn, TACC's Director of Life Science Computing. "The sheer number of potential combinations that can be screened in parallel before you ever go in the laboratory makes resources like those at TACC invaluable for cancer research."
Three projects powered by TACC supercomputers, which use virtual screening, molecular modeling and evolutionary analyses, respectively, to explore chemotherapeutic compounds, exemplify the type of cancer research advanced computing enables.

Virtual Screening:-

Shuxing Zhang, a researcher in the Department of Experimental Therapeutics at the University of Texas MD Anderson Cancer Center, leads a lab dedicated to computer-assisted rational drug design and discovery of novel targeted therapeutic agents.
The group develops new computational methods, using artificial intelligence and high-performance computing-based virtual screening strategies, that help the entire field of cancer drug discovery and development.
Identifying a new drug by intuition or trial and error is expensive and time consuming. Virtual screening, on the other hand, uses computer simulations to explore how a large number of small molecule compounds "dock," or bind, to a target to determine if they may be candidates for future drugs.
"In silico virtual screening is an invaluable tool in the early stages of drug discovery," said Joe Allen, a research associate at TACC. "It paints a clear picture not only of what types of molecules may bind to a receptor, but also what types of molecules would not bind, saving a lot of time in the lab."
One specific biological target that Zhang's group investigates is called TNIK (TRAF2- and NCK-interacting kinase). TNIK is an enzyme that plays a key role in cell signaling related to colon cancer. Silencing TNIK, it is believed, may suppress the proliferation of colorectal cancer cells.
Writing in Scientific Reports in September 2016, Zhang and his collaborators reported the results of a study that investigated known compounds with desirable properties that might act as TNIK inhibitors.
Using the Lonestar supercomputer at TACC, they screened 1,448 Food and Drug Administration-approved small molecule drugs to determine which had the molecular features needed to bind and inhibit TNIK.
They discovered that one -- mebendazole, an approved drug that fights parasites -- could effectively bind to the target. After testing it experimentally, they further found that the drug could also selectively inhibit TNIK's enzymatic activity.
As an FDA-approved drug that can be used at higher dosages without severe side effects, mebendazole is a strong candidate for further exploration and may even exhibit a 'synergic anti-tumor effect' when used with other anti-cancer drugs.
"Such advantages render the possibility of quickly translating the discovery into a clinical setting for cancer treatment in the near future," Zhang and his collaborators wrote.
In separate research published in Cell in 2013, Zhang's group used Lonestar to virtually screen an even greater number of novel inhibitors of Skp2, a critical oncogene that controls the cell cycle and is frequently observed as being overexpressed in human cancer.
"Molecular docking is a computationally expensive process, and the screening of 3 million drug-like compounds needs more than 2,000 days on a single CPU [central processing unit]," Zhang said. "By running the process on a high-performance computing cluster, we were able to screen millions of compounds within days instead of years."
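Zhang's point about parallelism can be illustrated with a toy screening loop: score each compound independently, then rank the library once. The `dock_score` function below is a placeholder for a real docking engine (which would take hours per compound), and the compound library is synthetic; only the score-in-parallel, rank-once structure reflects how cluster-scale screening works.

```python
from concurrent.futures import ThreadPoolExecutor

def dock_score(compound):
    """Placeholder for a real docking computation; returns a pseudo
    binding score derived from the compound name, lower = tighter binding."""
    return (sum(ord(c) for c in compound) % 100) / 10.0

def screen(compounds, top_n=3, workers=8):
    """Score every compound concurrently and return the best candidates.
    A thread pool stands in here for the worker nodes a cluster such as
    Lonestar would provide; each compound's score is independent of the
    others, which is why screening parallelizes so well."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(dock_score, compounds))
    ranked = sorted(zip(compounds, scores), key=lambda cs: cs[1])
    return ranked[:top_n]

library = ["compound-%04d" % i for i in range(1000)]
hits = screen(library)  # the few best-scoring candidates go to the wet lab
```

Because each docking run is independent, doubling the number of workers roughly halves the wall-clock time, which is how "2,000 CPU-days" becomes days on a cluster.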
Their computational approaches identified a specific Skp2 inhibitor that can selectively impair Skp2 activity and functions, thereby exhibiting potent anti-tumor activity.
"Our work at TACC has resulted in multiple potential drug candidates currently at the different stages of preclinical and clinical studies," said Zhang. "We hope to continue using the resources to identify more effective and less toxic therapeutics."

Molecular Modeling:-

Described as "the guardian of the genome," tumor protein 53 (p53) plays a crucial role in multicellular organisms, conserving the stability of DNA by preventing mutations and thereby acting as a tumor suppressor.
However, in approximately 50 percent of all human cancers, p53 is mutated and rendered inactive. Therefore, reactivation of mutant p53 using small molecules has been a long-sought-after anticancer therapeutic strategy.
Rommie Amaro, professor of Chemistry and Biochemistry at the University of California, San Diego, has been studying this important molecule for years, trying to understand how it works.
In September 2016, writing in the journal Oncogene, she reported results from the largest atomic-level simulation of the tumor suppression protein to date -- comprising more than 1.5 million atoms.
The simulations helped to identify new "pockets" -- binding sites on the surface of the protein -- where it may be possible to insert a small molecule that could reactivate p53. They revealed a level of complexity that is very difficult, if not impossible, to experimentally test.
"We could see how when the full-length p53 was bound to a DNA sequence that was a recognition sequence, the tetramer clamps down and grips onto the DNA -- which was unexpected," Amaro said.
In contrast, with the negative control DNA, p53 stays more open. "It actually relaxes and loosens its grip on the DNA," she said. "It suggested a mechanism by which this molecule could actually change its dynamics depending on the exact sequence of DNA."
According to Amaro, computing provides a better understanding of cancer mechanisms and ways to develop possible novel therapeutic avenues.
"When most people think about cancer research they probably don't think about computers, but biophysical models are getting to the point where they have a great impact on the science," she said.

Evolutionary Comparisons:-

Chemicals created by plants are the basis for the majority of the medicines used today. One such plant, the periwinkle (Catharanthus roseus), is used in chemotherapy protocols for leukemia and Hodgkin's lymphoma.
A completely different approach to drug discovery involves studying the evolution of plants that are known to be effective chemotherapeutic agents and their genetic relatives, since plants that share an evolutionary history often share related collections of chemical compounds.
University of Texas researchers -- working with researchers from King Abdulaziz University in Saudi Arabia, the University of Ottawa and Université de Montréal -- have been studying Rhazya stricta, an environmentally stressed, poisonous evergreen shrub found in Saudi Arabia that is a member of the family that includes the periwinkle.
To understand the genome and evolutionary history of Rhazya stricta, the researchers performed genome assemblies and analyses on TACC's Lonestar, Stampede and Wrangler systems. According to Robert Jansen, professor of Integrative Biology at UT and lead researcher on the project, the computational resources at TACC were essential for constructing and studying the plant's genome.
The results were published in Scientific Reports in September 2016.
"These analyses allowed the identification of genes involved in the monoterpene indole alkaloid pathway, and in some cases expansions of gene families were detected," he said.
The monoterpene indole alkaloid pathway produces compounds that have known therapeutic properties against cancer.

From the annotated Rhazya genome, the researchers developed a metabolic pathway database, RhaCyc, that can serve as a community resource and help identify new chemotherapeutic molecules.

Jansen and his team hope that by better characterizing the genome and evolutionary history using advanced computational methods, and making the metabolic pathway database available as a community resource, they can speed the development of new medicines in the future.


"There are a nearly infinite number of possible drug compounds," Vaughn said. "But knowing the principles of what a good drug might look like -- how it might bind to a certain pocket or what it might need to resemble -- helps narrow the scope immensely, accelerating discoveries, while reducing costs."

Computers learn to understand humans better by modelling them


Computers Understand Humans

Despite significant breakthroughs in artificial intelligence, it has been notoriously hard for computers to understand why a user behaves the way s/he does. Now researchers report that computers are able to learn to explain the behavior of individuals by tracking their glances and movements.


The picture shows how ABC-driven parameters lead to more accurate predictions of user behavior.





Researchers from Aalto University, the University of Birmingham and the University of Oslo present results paving the way for computers to learn psychologically plausible models of individuals simply by observing them. In a newly published conference article, the researchers showed that just by observing how long a user takes to click menu items, one can infer a model that reproduces similar behavior and accurately estimates some characteristics of that user's visual system, such as fixation durations.


Cognitive models that describe individual capabilities, as well as goals, can explain and hence predict individual behavior much better, even in new circumstances. However, learning these models from the indirect data available in practice has until now been out of reach.

"The benefit of our approach is that a much smaller amount of data is needed than for 'black box' methods. Previous methods for performing this type of tuning have either required extensive manual labor, or a large amount of very accurate observation data, which has limited the applicability of these models until now," Doctoral student Antti Kangasrääsiö from Aalto University explains.


The method is based on Approximate Bayesian Computation (ABC), a machine learning method developed to infer very complex models from observations, with applications in climate science and epidemiology, among other fields. It paves the way for automatic inference of complex models of human behavior from naturalistic observations. This could be useful in human-robot interaction, or in assessing individual capabilities automatically, for example detecting symptoms of cognitive decline.
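The core idea of ABC can be shown with a minimal rejection sampler: draw candidate parameters from a prior, simulate behavior under each candidate, and keep only the parameters whose simulated summary statistic lands close to the observed one. The exponential "selection time" model, the prior range, and the parameter names below are illustrative assumptions, not the researchers' actual user model.

```python
import random

def simulate(rate, n=50, seed=None):
    """Simulate n menu-selection times (seconds) for a hypothetical user
    whose single 'rate' parameter controls average speed (mean = 1/rate)."""
    rng = random.Random(seed)
    return [rng.expovariate(rate) for _ in range(n)]

def abc_rejection(observed, prior, n_samples=2000, eps=0.05, seed=0):
    """Minimal ABC rejection sampling: accept a candidate parameter when
    the mean of its simulated data falls within eps of the observed mean."""
    rng = random.Random(seed)
    obs_mean = sum(observed) / len(observed)
    accepted = []
    for _ in range(n_samples):
        theta = prior(rng)                                # draw from prior
        sim = simulate(theta, n=len(observed), seed=rng.random())
        if abs(sum(sim) / len(sim) - obs_mean) < eps:     # close enough?
            accepted.append(theta)
    return accepted

# Ground truth rate 2.0 (mean selection time 0.5 s); uniform prior (0.5, 5)
observed = simulate(2.0, n=200, seed=42)
posterior = abc_rejection(observed, prior=lambda r: r.uniform(0.5, 5.0))
estimate = sum(posterior) / len(posterior)   # posterior mean, near 2.0
```

The accepted samples approximate the posterior over the user's parameter; real applications replace the toy simulator with a full cognitive model and use richer summary statistics than the mean.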



"We will be able to infer a model of a person that also simulates how that person learns to act in totally new circumstances," Professor of Machine Learning at Aalto University Samuel Kaski says.


"We're excited about the prospects of this work in the field of intelligent user interfaces," says Antti Oulasvirta, Professor of User Interfaces at Aalto University.


"In the future, the computer will be able to understand humans in a somewhat similar manner as humans understand each other. It can then much better predict not only the benefits of a potential change but also its costs to an individual, a capability that adaptive interfaces have lacked," he continues.


The results will be presented at CHI, the world's largest computer-human interaction conference, in Denver, USA, in May 2017.

Tuesday, 18 April 2017

Think you can secure your mobile phone with a fingerprint?


Similarities in partial fingerprints may be sufficient to trick biometric security systems on smartphones


Smartphones typically capture a limited portion of the full fingerprint using small sensors. Multiple partial fingerprints are captured for the same finger during enrollment. The figure shows a set of partial fingerprints (b) extracted from the full fingerprint (a).




No two people are believed to have identical fingerprints, but researchers at the New York University Tandon School of Engineering and Michigan State University College of Engineering have found that partial similarities between prints are common enough that the fingerprint-based security systems used in mobile phones and other electronic devices can be more vulnerable than previously thought.

The vulnerability lies in the fact that fingerprint-based authentication systems feature small sensors that do not capture a user's full fingerprint. Instead, they scan and store partial fingerprints, and many phones allow users to enroll several different fingers in their authentication system. Identity is confirmed when a user's fingerprint matches any one of the saved partial prints. The researchers hypothesized that there could be enough similarities among different people's partial prints that one could create a "MasterPrint."

Nasir Memon, a professor of computer science and engineering at NYU Tandon and the research team leader, explained that the MasterPrint concept bears some similarity to a hacker who attempts to crack a PIN-based system using a commonly adopted password such as 1234. "About 4 percent of the time, the password 1234 will be correct, which is a relatively high probability when you're just guessing," said Memon. The research team set out to see if they could find a MasterPrint that could reveal a similar level of vulnerability. Indeed, they found that certain attributes in human fingerprint patterns were common enough to raise security concerns.

Memon and his colleagues, NYU Tandon Postdoctoral Fellow Aditi Roy and Michigan State University Professor of Computer Science and Engineering Arun Ross, undertook their analysis using 8,200 partial fingerprints. Using commercial fingerprint verification software, they found an average of 92 potential MasterPrints for every randomly sampled batch of 800 partial prints. (They defined a MasterPrint as one that matches at least 4 percent of the other prints in the randomly sampled batch.)

They found, however, just one full-fingerprint MasterPrint in a sample of 800 full prints. "Not surprisingly, there's a much greater chance of falsely matching a partial print than a full one, and most devices rely only on partials for identification," said Memon.

The team analyzed the attributes of MasterPrints culled from real fingerprint images, and then built an algorithm for creating synthetic partial MasterPrints. Experiments showed that synthetic partial prints have an even wider matching potential, making them more likely to fool biometric security systems than real partial fingerprints. With their digitally simulated MasterPrints, the team reported successfully matching between 26 and 65 percent of users, depending on how many partial fingerprint impressions were stored for each user and assuming a maximum number of five attempts per authentication. The more partial fingerprints a given smartphone stores for each user, the more vulnerable it is.
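The scaling the paragraph describes (more stored partials and more attempts mean higher risk) can be put into a back-of-the-envelope formula under a strong independence assumption: if each MasterPrint-to-partial comparison falsely matches with probability p, then k enrolled partials and five attempts give an unlock probability of 1 - (1 - p)^(5k). Real comparisons are correlated, so this is only a rough model of the effect, not the paper's empirical 26-65 percent result; the 4 percent rate comes from the MasterPrint definition above.

```python
def unlock_probability(p_match, stored_partials, attempts=5):
    """Probability that at least one of `attempts` MasterPrints matches
    at least one of the user's stored partial prints, treating every
    comparison as independent with false-match rate p_match.
    (Independence is a simplifying assumption; real prints correlate.)"""
    comparisons = stored_partials * attempts
    return 1.0 - (1.0 - p_match) ** comparisons

# With the 4 percent per-comparison rate used to define a MasterPrint:
low = unlock_probability(0.04, stored_partials=1)    # one partial enrolled
high = unlock_probability(0.04, stored_partials=12)  # many partials enrolled
```

Even this crude model shows why enrolling many partial impressions per finger widens the attack surface: the unlock probability climbs steeply with the number of stored partials.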

Roy emphasized that their work was done in a simulated environment. She noted, however, that improvements in creating synthetic prints and techniques for transferring digital MasterPrints to physical artifacts in order to spoof a device pose significant security concerns. The high matching capability of MasterPrints points to the challenges of designing trustworthy fingerprint-based authentication systems and reinforces the need for multi-factor authentication schemes. She said this work may inform future designs.

"As fingerprint sensors become smaller in size, it is imperative for the resolution of the sensors to be significantly improved in order for them to capture additional fingerprint features," Ross said. "If resolution is not improved, the distinctiveness of a user's fingerprint will be inevitably compromised. The empirical analysis conducted in this research clearly substantiates this."

Memon noted that the results of the team's research are based on minutiae-based matching, which any particular vendor may or may not use. Nevertheless, as long as partial fingerprints are used for unlocking devices and multiple partial impressions per finger are stored, the probability of finding MasterPrints increases significantly, he said.

"NSF's investments in cybersecurity research build the foundational knowledge base needed to protect us in cyberspace," said Nina Amla, program director in the Division of Computing and Communication Foundations at the National Science Foundation. "Much as other NSF-funded research has helped identify vulnerabilities in everyday technologies, such as cars or medical devices, investigating the vulnerabilities of fingerprint-based authentication systems informs continuous advancements in security, ensuring more reliable protection for users."


Source - NYU Tandon School of Engineering

Thursday, 6 April 2017

Software-based System can determine the cause of ischemic stroke



Software-based system improves the ability to determine the cause of ischemic stroke


Determining the cause of an ischemic stroke -- one caused by an interruption of blood supply -- is critical to preventing a second stroke and is a primary focus in the evaluation of stroke patients. But despite that importance, physicians have long lacked a robust and objective means of doing so. Now a team of investigators at the Athinoula A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital (MGH) and the MGH Stroke Service have developed a software package that provides evidence-based, automated support for diagnosing the cause of stroke. Their study validating the package -- called Causative Classification of Stroke (CCS) -- was published online in JAMA Neurology.







"This was a much-needed study because, although stroke classification systems are often used in research and clinical practice, these systems are not always able to produce subtypes with discrete pathophysiological, diagnostic and prognostic characteristics," says Hakan Ay, MD, a vascular neurologist, Martinos Center investigator and senior author of the JAMA Neurology paper. "We found that the CCS-based classifications provided better correlations between clinical and imaging stroke features and were better able to discriminate among stroke outcomes than were two conventional, non-automated classification methods."


There are more than 150 different possible causes -- or etiologies -- of ischemic stroke, and approximately half of patients exhibit features suggesting more than one possible cause. This leads to considerable complexity in determining the cause of a stroke and, in roughly one of two patients, can lead to disagreements among physicians about the cause. The CCS software helps to reduce this complexity by exploiting classification criteria that are well defined, replicable and based on evidence rather than subjective assessment.


The CCS software does this in several ways. First, it weights the possible etiologies by considering the relative potential of each to cause a stroke. Second, in the presence of multiple potential causes it incorporates the clinical and imaging features that make one mechanism more probable than others for an individual patient. Third, it determines the likelihood of that cause by taking into account the number of diagnostic tests that were performed. And finally, it ensures that data is entered in a consistent manner. The software can also serve as an important research tool, by providing investigators with both the ability to examine how stroke etiologies interact with one another and the flexibility to define new etiology subtypes according to the needs of the individual research project.
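The weight-the-evidence logic described above can be caricatured as a rule-based scorer: each etiology accumulates points from its supporting findings, and the winner's confidence is scaled by how complete the diagnostic workup was. The etiology names are standard ischemic stroke subtypes, but the weights, findings, and completeness scaling below are invented for illustration and bear no relation to the actual CCS tables.

```python
# Hypothetical evidence weights per etiology (NOT the actual CCS criteria)
EVIDENCE_WEIGHTS = {
    "large-artery atherosclerosis": {"carotid stenosis >= 50%": 3,
                                     "cortical infarct": 1},
    "cardio-aortic embolism": {"atrial fibrillation": 3,
                               "infarcts in multiple territories": 2},
    "small-artery occlusion": {"lacunar infarct": 3,
                               "no cortical signs": 1},
}

def classify(findings, tests_performed, tests_expected=5):
    """Score each etiology by its supporting findings, then scale the
    winner's confidence by diagnostic-workup completeness (mirroring the
    idea that fewer tests performed means a less certain assignment)."""
    scores = {
        etiology: sum(w for f, w in weights.items() if f in findings)
        for etiology, weights in EVIDENCE_WEIGHTS.items()
    }
    best = max(scores, key=scores.get)
    confidence = scores[best] * min(tests_performed / tests_expected, 1.0)
    return best, confidence

findings = {"atrial fibrillation", "infarcts in multiple territories"}
etiology, confidence = classify(findings, tests_performed=4)
```

The sketch shows only the shape of the approach: consistent data entry, weighted competing causes, and a confidence that reflects the completeness of testing.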
The MGH team previously showed that use of the CCS algorithm reduced the disagreement rate among physicians from 50 percent to approximately 20 percent. The recently published JAMA Neurology study further established the utility of the algorithm by demonstrating its ability to generate categories of etiologies with different clinical, imaging and prognostic characteristics for 1,816 ischemic stroke patients enrolled in two previous MGH-based studies. Based on patient data, CCS was able to assign etiologies to 20 to 40 percent of the patients for which two other systems were unable to determine a cause. It also was better at determining the likelihood of second stroke within 90 days.


"The validity data that have emerged from the current study add to the utility of the software-based approach and highlight once again that careful identification and accurate classification of the underlying etiology is paramount for every patient with stroke," says Ay, who is an associate professor of Radiology at Harvard Medical School. "The information the software provides not only is critical for effective stroke prevention but also could increase the chances for new discoveries by enhancing the statistical power in future studies of etiologic stroke subtypes. We estimate that, compared to conventional systems, the use of CCS in stroke prevention trials testing targeted treatments for a particular etiologic subtype could reduce the required sample size by as much as 30 percent."


The MGH-licensed CCS is available at https://ccs.mgh.harvard.edu/ and is free for academic use. The software was designed to be a "living algorithm" and can accommodate new information as it emerges. New etiology-specific biomarkers, genetic markers, imaging markers and clinical features that become available can be incorporated into the existing CCS algorithm to further enhance its ability to determine the underlying causes of stroke.

Saturday, 18 March 2017

Early warning system for mass cyber attacks


Mass cyber attacks


Mass attacks from the Internet are a common fear: Millions of requests in a short time span overload online services, grinding them to a standstill for hours and bringing Internet companies to their knees. The operators of the site under attack can often only react by redirecting the wave of requests, or by countering it with an exceptionally powerful server. This has to happen very quickly, however.

Researchers from the Competence Center for IT Security (CISPA) at Saarland University have developed a kind of early warning system for this purpose. The scientists will present details and first results at the CeBIT computer fair in Hannover.
These mass cyber attacks, known as "Distributed Denial of Service" (DDoS) attacks, are considered one of the scourges of the Internet. Because they are relatively easy to conduct, they are used by teenagers for digital power games, by criminals as a service for the cyber mafia, and by governments as a digital weapon. According to the software enterprise Kaspersky, some 80 countries were affected in the last quarter of 2016 alone. Last October, for example, several major online platforms such as Twitter, Netflix, Reddit and Spotify were unavailable to Internet users in North America, Germany and Japan for several hours. A new type of DDoS attack, a so-called amplification attack, was identified as the source of these outages.
"What makes this so insidious is that the attackers achieve maximum damage with very little effort," says Christian Rossow, professor for IT security at the Saarland University, and head of the System Security Group at the local IT Security Competence Center, CISPA. Remote-controlled computers are used to direct requests at vulnerable systems in such a way that the system's responses far exceed the number of requests. The request addresses are then replaced by the Internet address of the victim. Rossow has identified 14 different Internet protocols that can be exploited for this kind of attack.
To investigate these attacks, and the people and motives behind them, more closely, Rossow developed a special kind of digital bait for distributed attacks, known as honeypots, in collaboration with the CISPA researchers Lukas Kraemer and Johannes Krupp and with colleagues from Japan. The researchers deployed 21 of these honeypot traps in obscure corners of the Internet, enabling them to document more than 1.5 million attacks. From this data, Rossow could identify the distinct phases of an attack, which helped him develop an early warning system. He additionally attached secret digital markers to the attack codes he discovered in the digital wilderness, and was thus able to trace attacks back to their source. "This is quite impressive, because these address counterfeiters usually remain hidden," says Rossow.
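The early-warning idea can be sketched in a few lines: honeypot traffic typically shows a low-rate scanning phase before the high-rate abuse phase, so spotting that transition gives advance warning. The two-phase model and the thresholds below are illustrative, not the actual CISPA classifier.

```python
# Toy sketch of phase-based early warning from honeypot logs. A real
# system would use far richer features; the threshold and synthetic
# timeline here are invented for illustration.

SCAN_THRESHOLD = 10  # requests/minute: above this, treat it as an attack

def classify_phases(requests_per_minute):
    """Label each minute of honeypot traffic as quiet, scanning or attack."""
    labels = []
    for rate in requests_per_minute:
        if rate == 0:
            labels.append("quiet")
        elif rate <= SCAN_THRESHOLD:
            labels.append("scanning")
        else:
            labels.append("attack")
    return labels

def early_warning(labels):
    """Warn when a scanning phase is later followed by an attack phase."""
    return "scanning" in labels and "attack" in labels[labels.index("scanning"):]

timeline = [0, 0, 2, 3, 0, 150, 900]  # synthetic honeypot log
labels = classify_phases(timeline)
warn = early_warning(labels)
```

The value of the honeypot data is exactly this ordering: the low-rate probes arrive before the flood, so a warning can be raised while there is still time to redirect traffic.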
This is not the first time that Rossow has systematically infiltrated cyber-criminals' networks. He also managed to take down the infamous botnet "Gameover Zeus" in a similar manner, on behalf of the US Federal Bureau of Investigation (FBI). Since then, he has redesigned his bait to match the latest varieties of DDoS attacks. Cyber-criminals today no longer rely solely on vulnerable servers, but also attack networked televisions, webcams and even refrigerators. The "Internet of Things" makes it possible.


Sunday, 12 March 2017

New data mining resource for organic materials available


Data mining resource for organic materials



A new, freely accessible database of organic and organometallic materials' electronic structures is now available online for research with quantum materials.
Published by the Condensed Matter research group at the Nordic Institute for Theoretical Physics (NORDITA) at KTH Royal Institute of Technology in Sweden, the Organic Materials Database is intended as a data mining resource for research into the electric and magnetic properties of crystals, which are primarily defined by their electronic band structure -- an energy spectrum of electron motion that stems from electrons' quantum-mechanical properties.
Computer calculation of such structures is difficult and demands large computational resources. But thanks to advances in computational power and high demand for predicting materials with target properties, a new way of dealing with quantum materials has emerged. Materials informatics focuses on performing -- and developing tools for -- high-throughput computing and data mining.
"You can think of it as aggregate informatics analysis, where the properties of a single compound are captured approximately and resources are aimed toward understanding global trends within the large datasets," says Alexander Balatsky, Professor of Theoretical Physics at KTH.
Applications of this informatics-driven approach are wide-ranging and cover, for example, the search for various functional materials with special electrical, optical and magnetic properties, including the 2016 Nobel Prize-winning topological states of matter -- an important building block of a quantum computer.
The database will facilitate the first-principles investigation of organics and the prediction of organic functional materials, given their high potential for industrial applications, Balatsky says.
Electronic band structures are calculated using density functional theory, a standard tool in modern materials science. The OMDB web interface allows users to search for materials with specified target properties through non-trivial queries about their electronic structure, including advanced tools for pattern recognition and for searching by chemical and physical properties.
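The kind of target-property query the OMDB interface supports can be illustrated with a local mock. The material records below are invented placeholders, not actual OMDB entries, and this stands in for the real web interface rather than calling it.

```python
# Illustrative data-mining query: filter a materials dataset by a target
# electronic property (here, a semiconductor-like band gap). All records
# are made-up placeholders, not real OMDB data.

materials = [
    {"name": "compound A", "band_gap_eV": 0.0, "magnetic": False},
    {"name": "compound B", "band_gap_eV": 1.4, "magnetic": False},
    {"name": "compound C", "band_gap_eV": 2.1, "magnetic": True},
    {"name": "compound D", "band_gap_eV": 3.6, "magnetic": False},
]

def find_semiconductors(db, gap_min=0.5, gap_max=3.0):
    """Return materials whose band gap falls in a semiconductor-like range."""
    return [m["name"] for m in db if gap_min <= m["band_gap_eV"] <= gap_max]

hits = find_semiconductors(materials)
```

Replacing hand-picked candidates with queries like this over thousands of computed structures is the essence of the high-throughput, materials-informatics approach the article describes.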
The project is already yielding results, including the discovery of new organic Dirac materials, reported in two scientific papers by R. M. Geilhufe et al., Phys. Rev. B 95, 041103(R) (2017) and an arXiv preprint (2016). There is also an ongoing search for novel materials for organic solar cells, organic metals and semiconductors.
More information about the functionality and potential applications of the OMDB can be found in the article by S. S. Borysov et al., "Organic Materials Database: An Open-Access Online Database for Data Mining," PLOS ONE, to be published in 2017.
The database is supported by the Villum Center for Dirac Materials and Nordita. The computational resources are provided by the Max Planck Institute of Microstructure Physics in Halle (Germany) and the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre at Linköping University.


Source:
KTH The Royal Institute of Technology

Saturday, 4 March 2017

Can math help explain our bodies -- and our diseases?


New model aims to combine the 'beauty' of mathematics with biology, to set the stage for future discovery



Using advanced mathematics, researchers hope to create models of biological systems that can inform our understanding of normal development and disease.

What makes a cluster of cells become a liver, or a muscle? How do our genes give rise to proteins, proteins to cells, and cells to tissues and organs?
The incredible complexity of how these biological systems interact boggles the mind -- and drives the work of biomedical scientists around the world.
But a pair of mathematicians has introduced a new way of thinking about these concepts that may help set the stage for better understanding of our bodies and other living things.
Writing in Proceedings of the National Academy of Sciences, the pair from the University of Michigan Medical School and University of California, Berkeley introduce a framework for using math to understand how genetic information and interactions between cells give rise to the actual function of a particular type of tissue.
They note it's a highly idealized framework -- not one that takes into account every detail of this process, called 'emergence of function.'
But by stepping back and making a simplified model based on mathematics, they hope to create a basis for scientists to understand the changes that happen over time within and between cells to make living tissues possible. It could also help with understanding of how diseases such as cancer can arise when things don't go as planned.
Beauty, combined
The pair -- U-M Medical School assistant professor of computational medicine Indika Rajapakse, Ph.D. and Berkeley professor emeritus Stephen Smale, Ph.D. -- have worked on the concepts for several years.
"All the time, this process is happening in our bodies, as cells are dying and arising, and yet they keep the function of the tissue going," says Rajapakse. "We need to use beautiful mathematics and beautiful biology together to understand the beauty of a tissue."
For the new work, they even hearken back to Alan Turing, the pioneering British mathematician famous for the theoretical "Turing machine" and for the code-breaking machines that cracked Nazi ciphers during World War II.
Toward the end of his life, Turing began looking at the mathematical underpinnings of morphogenesis -- the process that allows natural patterns such as a zebra's stripes to develop as a living thing grows from an embryo to an adult.
"Our approach adapts Turing's technique, combining genome dynamics within the cell and the diffusion dynamics between cells," says Rajapakse, who leads the U-M 4D+ Genome Lab in the Department of Computational Medicine and Bioinformatics.
His team of biologists and engineers conduct experiments that capture human genome dynamics in three dimensions using biochemical methods and high resolution imaging. Rajapakse also holds an appointment in the U-M Department of Mathematics, part of the College of Literature, Science and the Arts.

Bringing math and the genome together:-

Smale, who retired from Berkeley but is still active in research, is considered a pioneer of modeling dynamical systems -- those that change over time and in space. He won the highest prize in mathematics, the Fields Medal, in 1966.
Several years ago, Rajapakse approached him during a visit to U-M, where Smale earned his undergraduate and graduate degrees. They began exploring how to study the human genome -- the set of genes in an organism's DNA -- as a dynamical system.
They based their work on the idea that while the genes of an organism remain the same throughout life, how cells use them does not.
Last spring, they published a paper that lays a mathematical foundation for gene regulation -- the process that governs how often and when genes get "read" by cells in order to make proteins.
"Neither Turing nor Steve Smale, when we began our work, knew about the genome," being classically trained mathematicians, says Rajapakse. "But using mathematical techniques, we can study the natural dynamics of the genomes of groups of cells as they develop and interact with one another, forming networks."
Instead of the nodes of those networks being static, as Turing assumed, the new work sees them as dynamical systems. The genes may be "hardwired" into the cell, but how they are expressed depends on factors such as epigenetic tags added as a result of environmental factors, and more.

Next steps:-

As a result of his work with Smale, Rajapakse now has funding from the Defense Advanced Research Projects Agency, or DARPA, to keep exploring the issue of emergence of function -- including what happens when the process changes.
Cancer, for instance, arises from a cell development and proliferation cycle gone awry. And the process by which induced pluripotent stem cells are made in a lab -- essentially turning back the clock on a cell type so that it regains the ability to become other cell types -- is another example.
Rajapakse aims to use data from real-world genome and cell biology experiments in his lab to inform future work, focused on cancer and cell reprogramming. This work will also include collaborations with fellow members of the U-M Translational Oncology Program and Thomas Ried, MD at the National Cancer Institute, with the goal of using mathematics to look at the latest products of basic research on cancer.
He's also organizing a gathering of mathematicians from around the world to look at computational biology and the genome this summer in Barcelona.
"The cell cycle is the most precise, beautiful thing," Rajapakse says. "When we have a clear mathematical understanding, we can create computer models and further explore the beauty of us, explained through mathematics."


Source - Michigan Medicine - University of Michigan

Friday, 3 March 2017

New software allows for 'decoding digital brain data'


Programmers got together to improve their ability to read the human mind


New software allows for 'decoding digital brain data' to reveal how neural activity gives rise to learning, memory and other cognitive functions. The software can be used in real time during an fMRI brain scan.

Early this year, about 30 neuroscientists and computer programmers got together to improve their ability to read the human mind.


The hackathon was one of several that researchers from Princeton University and Intel, the largest maker of computer processors, organized to build software that can tell what a person is thinking in real time, while the person is thinking it.
The collaboration between researchers at Princeton and Intel has enabled rapid progress on the ability to decode digital brain data, scanned using functional magnetic resonance imaging (fMRI), to reveal how neural activity gives rise to learning, memory and other cognitive functions.
A review of computational advances toward decoding brain scans appears in the journal Nature Neuroscience, authored by researchers at the Princeton Neuroscience Institute and Princeton's departments of computer science and electrical engineering, together with colleagues at Intel Labs, a research arm of Intel.
"The capacity to monitor the brain in real time has tremendous potential for improving the diagnosis and treatment of brain disorders as well as for basic research on how the mind works," said Jonathan Cohen, the Robert Bendheim and Lynn Bendheim Thoman Professor in Neuroscience, co-director of the Princeton Neuroscience Institute, and one of the founding members of the collaboration with Intel.
Since the collaboration's inception two years ago, the researchers have whittled the time it takes to extract thoughts from brain scans from days down to less than a second, said Cohen, who is also a professor of psychology.
One type of experiment that is benefiting from real-time decoding of thoughts occurred during the hackathon. The study, designed by J. Benjamin Hutchinson, a former postdoctoral researcher in the Princeton Neuroscience Institute who is now an assistant professor at Northeastern University, aimed to explore activity in the brain when a person is paying attention to the environment, versus when his or her attention wanders to other thoughts or memories.
In the experiment, Hutchinson asked a research volunteer -- a graduate student lying in the fMRI scanner -- to look at a detail-filled picture of people in a crowded café. From his computer in the console room, Hutchinson could tell in real time whether the graduate student was paying attention to the picture or whether her mind was drifting to internal thoughts. Hutchinson could then give the graduate student feedback on how well she was paying attention by making the picture clearer and stronger in color when her mind was focused on the picture, and fading the picture when her attention drifted.
The ongoing collaboration has benefited neuroscientists who want to learn more about the brain and computer scientists who want to design more efficient computer algorithms and processing methods to rapidly sort through large data sets, according to Theodore Willke, a senior principal engineer at Intel Labs in Hillsboro, Oregon, and head of Intel's Mind's Eye Lab. Willke directs Intel's part of the collaborative team.
"Intel was interested in working on emerging applications for high-performance computing, and the collaboration with Princeton provided us with new challenges," Willke said. "We also hope to export what we learn from studies of human intelligence and cognition to machine learning and artificial intelligence, with the goal of advancing other important objectives, such as safer autonomous driving, quicker drug discovery and ealier detection of cancer."
Since the invention of fMRI two decades ago, researchers have been improving the ability to sift through the enormous amounts of data in each scan. An fMRI scanner captures signals from changes in blood flow that happen in the brain from moment to moment as we are thinking. But reading from these measurements the actual thoughts a person is having is a challenge, and doing it in real time is even more challenging.
A number of techniques for processing these data have been developed at Princeton and other institutions. For example, work by Peter Ramadge, the Gordon Y.S. Wu Professor of Engineering and professor of electrical engineering at Princeton, has enabled researchers to identify brain activity patterns that correlate to thoughts by combining data from brain scans from multiple people. Designing computerized instructions, or algorithms, to carry out these analyses continues to be a major area of research.
Powerful high-performance computers help cut down the time that it takes to do these analyses by breaking the task up into chunks that can be processed in parallel. The combination of better algorithms and parallel computing is what enabled the collaboration to achieve real-time brain scan processing, according to Kai Li, Princeton's Paul M. Wythes '55 P86 and Marcia R. Wythes P86 Professor in Computer Science and one of the founders of the collaboration.
Since the beginning of the collaboration in 2015, Intel has contributed to Princeton more than $1.5 million in computer hardware and support for Princeton graduate students and postdoctoral researchers. Intel also employs 10 computer scientists who work on this project with Princeton, and these experts work closely with Princeton faculty, students and postdocs to improve the software.
These algorithms locate thoughts within the data by using machine learning, the same technique that facial recognition software uses to help find friends on social media platforms such as Facebook. Machine learning involves exposing computers to enough examples so that the computers can classify new objects they've never seen before.
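The classification idea can be sketched on synthetic data: learn a prototype voxel pattern per thought category from labelled scans, then label a new scan by its closest prototype. Real decoders (including those in BrainIAK) are far more sophisticated; everything below, including the two categories and the "scans," is invented for illustration.

```python
# Toy nearest-centroid decoder on synthetic voxel patterns. Illustrates
# the machine-learning idea behind fMRI decoding, not any real pipeline.

import random

random.seed(0)
N_VOXELS = 50

def synth_scan(base, noise=0.3):
    """A noisy copy of a category's underlying activity pattern."""
    return [x + random.gauss(0, noise) for x in base]

# Two made-up "ground truth" patterns: attending vs. mind-wandering.
patterns = {
    "attending":      [1.0 if k < 25 else 0.0 for k in range(N_VOXELS)],
    "mind-wandering": [0.0 if k < 25 else 1.0 for k in range(N_VOXELS)],
}

# Training set: a handful of labelled noisy scans per category.
train = {label: [synth_scan(p) for _ in range(20)]
         for label, p in patterns.items()}

def centroid(scans):
    """Average the training scans voxel-by-voxel into a prototype."""
    return [sum(col) / len(col) for col in zip(*scans)]

centroids = {label: centroid(scans) for label, scans in train.items()}

def classify(scan):
    """Assign the label of the nearest prototype (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(scan, centroids[label]))

# Decode a fresh, unseen "attending" scan.
label = classify(synth_scan(patterns["attending"]))
```

The real engineering challenge the article describes is doing the equivalent of `classify` on hundreds of thousands of voxels fast enough to feed the answer back to the person while they are still in the scanner.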
One of the results of the collaboration has been the creation of a software toolbox, called the Brain Imaging Analysis Kit (BrainIAK), that is openly available via the Internet to any researchers looking to process fMRI data. The team is now working on building a real-time analysis service. "The idea is that even researchers who don't have access to high-performance computers, or who don't know how to write software to run their analyses on these computers, would be able to use these tools to decode brain scans in real time," said Li.
What these scientists learn about the brain may eventually help individuals combat difficulties with paying attention, or other conditions that benefit from immediate feedback.
For example, real-time feedback may help patients train their brains to weaken intrusive memories. While such "brain-training" approaches need additional validation to make sure that the brain is learning new patterns and not just becoming good at doing the training exercise, these feedback approaches offer the potential for new therapies, Cohen said. Real-time analysis of the brain could also help clinicians make diagnoses, he said.
The ability to decode the brain in real time also has applications in basic brain research, said Kenneth Norman, professor of psychology and the Princeton Neuroscience Institute. "As cognitive neuroscientists, we're interested in learning how the brain gives rise to thinking," said Norman. "Being able to do this in real time vastly increases the range of science that we can do," he said.
Another way the technology can be used is in studies of how we learn. For example, when a person listens to a math lecture, certain neural patterns are activated. Researchers could look at the neural patterns of people who understand the math lecture and see how they differ from neural patterns of someone who isn't following along as well, according to Norman.
The ongoing collaboration is now focused on improving the technology to obtain a clearer window into what people are thinking about, for example, decoding in real time the specific identity of a face that a person is mentally visualizing.
One of the challenges the computer scientists had to overcome was how to apply machine learning to the type of data generated by brain scans. A face-recognition algorithm can scan hundreds of thousands of photographs to learn how to classify new faces, but the logistics of scanning peoples' brains are such that researchers usually only have access to a few hundred scans per person.
Although the number of scans is few, each scan contains a rich trove of data. The software divides the brain images into little cubes, each about one millimeter wide. These cubes, called voxels, are analogous to the pixels in a two-dimensional picture. The brain activity in each cube is constantly changing.
To make matters more complex, it is the connections between brain regions that give rise to our thoughts. A typical scan can contain 100,000 voxels, and if each voxel can talk to all the other voxels, the number of possible conversations is immense. And these conversations are changing second by second. The collaboration of Intel and Princeton computer scientists overcame this computational challenge. The effort included Li as well as Barbara Engelhardt, assistant professor of computer science, and Yida Wang, who earned his doctorate in computer science from Princeton in 2016 and now works at Intel Labs.
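The "immense" number of conversations above is easy to work out: with n voxels, the number of unordered voxel pairs is n(n-1)/2.

```python
# The scale of the voxel-interaction problem, worked out explicitly.
n_voxels = 100_000
n_pairs = n_voxels * (n_voxels - 1) // 2
# Nearly five billion pairwise interactions, and the fMRI signal in
# every voxel changes second by second.
```

That count is for a single time point; tracking how the pairwise relationships themselves change over the course of a scan is what demands the parallel algorithms described above.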
Prior to the recent progress, it would take researchers months to analyze a data set, said Nicholas Turk-Browne, professor of psychology at Princeton. With the availability of real-time fMRI, a researcher can change the experiment while it is ongoing. "If my hypothesis concerns a certain region of the brain and I detect in real time that my experiment is not engaging that brain region, then we can change what we ask the research volunteer to do to better engage that region, potentially saving precious time and accelerating scientific discovery," Turk-Browne said.
One eventual goal is to be able to create pictures from people's thoughts, said Turk-Browne. "If you are in the scanner and you are retrieving a special memory, such as from childhood, we would hope to generate a photograph of that experience on the screen. That is still far off, but we are making good progress."

Source - Princeton University

Thursday, 2 March 2017

You can hear the FM radio of that place


AMAZING TECHNOLOGY :-


Just point to any of the green dots on the globe and you can hear the FM radio of that place.



http://radio.garden/live

Thursday, 23 February 2017

New reliable technique to track web users across browsers


Fingerprinting technique to use machine-level features to identify users:-


A novel method links user information across browsers using operating-system and hardware-level features, identifying 99.24 percent of users, compared with 90.84 percent for the best single-browser fingerprinting techniques




For good or ill, what users do on the web is tracked. Banks track users as an authentication technique, to offer their customers enhanced security protection. Retailers track customers and potential customers in order to deliver personalized service tailored to their tastes and needs.
The method commonly used for tracking is called web fingerprinting. Web fingerprinting is a way of collecting information that can be used to fully or partially identify a given user, even when cookies are disabled.
Such techniques have been evolving quickly. Yet, the most advanced and commonly used methods track users in a single browser only.
Now a team of researchers led by Yinzhi Cao, assistant professor of computer science and engineering at Lehigh University (Bethlehem, PA) -- and including graduate student Song Li, also of Lehigh University, and Erik Wijmans of Washington University in St. Louis -- has developed the first cross-browser fingerprinting technique to use machine-level features to identify users. The work is described in a paper titled "(Cross-)Browser Fingerprinting via OS and Hardware Level Features." Cao and his colleagues are scheduled to present their findings at the Internet Society's Network and Distributed System Security (NDSS) Symposium next week, February 26 through March 1, in San Diego, CA.
The authors write: "Our principal contribution is being the first to use many novel OS and hardware features, especially computer graphics ones, in both single- and cross-browser fingerprinting. Particularly, our approach with new features can successfully fingerprint 99.24% of users as opposed to 90.84% for AmIUnique, i.e., state of the art, on the same dataset for single-browser fingerprinting."
In addition, their technique achieves higher uniqueness rates than the only other cross-browser approach in the literature, with similar stability.
"The only other cross-browser fingerprinting work uses IP address as the main feature by which to identify users," says Cao. "This method has been criticized as too unstable as people use the internet at home, work and on different devices. Using an IP address is too dynamic and unreliable."
Cao's novel approach draws on OS- and hardware-level features, including the graphics card exposed through WebGL, the audio stack through AudioContext, and the CPU through hardwareConcurrency. In addition to uniquely identifying more users than AmIUnique for single-browser fingerprinting, and than the only other cross-browser fingerprinting technique in the literature, the approach is highly reliable: according to the study, removing any single feature decreases accuracy by at most 0.3 percent.
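The general idea of combining stable machine-level features into one identifier can be sketched as follows. Real fingerprinting runs as JavaScript in the browser against APIs such as WebGL, AudioContext and navigator.hardwareConcurrency; this mock, with invented feature values, only shows how features are combined into a stable identifier that does not depend on which browser collected them.

```python
# Hedged sketch of feature-based fingerprinting: hash a set of stable
# OS/hardware attributes into a single identifier. Feature values below
# are invented examples, not real collected data.

import hashlib

def fingerprint(features):
    """Hash a feature dictionary (order-independently) into a stable ID."""
    blob = "|".join(f"{k}={features[k]}" for k in sorted(features))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

machine = {
    "gpu_renderer": "ExampleGPU 9000",  # e.g. exposed via WebGL
    "audio_hash": "a81f",               # e.g. derived from AudioContext
    "cpu_cores": 8,                     # e.g. navigator.hardwareConcurrency
}

# The same machine yields the same fingerprint regardless of the order
# in which the features were collected.
fp1 = fingerprint(machine)
fp2 = fingerprint(dict(reversed(list(machine.items()))))
```

Because these attributes belong to the machine rather than the browser, two browsers on the same device report the same values, which is what makes cross-browser linking possible; the study's reliability result corresponds to such a scheme degrading only slightly when any one feature is missing.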
The team used crowdsourcing for data collection, asking participants to visit their website using two different browsers of their choice and incentivizing them to use the third browser by offering additional payment.
According to Cao, the ideal next step for this work would be for a financial institution to adopt the approach as a way to provide multi-factor authentication for their customers.