Paper Summary. A Berkeley View of Systems Challenges for AI
This position paper from Berkeley identifies an agenda for systems research in AI for the next 10 years. The paper also serves to publicize/showcase their research and to steer interest towards these directions, which is why you really write position papers.
The paper motivates the systems agenda by discussing how systems research/development played a crucial role in fueling AI's recent success. It says that the remarkable progress in AI has been made possible by a "perfect storm" emerging over the past two decades, bringing together: (1) massive amounts of data, (2) scalable computer and software systems, and (3) the broad accessibility of these technologies.
The rest of the paper talks about the trends in AI and how those map to their systems research agenda for AI.
Trends and challenges
The paper identifies four basic trends in the AI area:
- Mission-critical AI: Design AI systems that learn continually by interacting with a dynamic environment in a timely, robust, and secure manner.
- Personalized AI: Design AI systems that enable personalized applications and services while respecting users' privacy and security.
- AI across organizations: Design AI systems that can train on datasets owned by different organizations without compromising their confidentiality. (I think it was possible to simplify the presentation by combining this with Personalized AI.)
- AI demands outpacing Moore's Law: Develop domain-specific architectures and distributed software systems to address the performance needs of future AI applications in the post-Moore's Law era.
Acting in dynamic environments
R1: Continual learning
Despite Reinforcement Learning (RL)'s successes (Atari games, AlphaGo in chess and Go), RL has not seen wide-scale real-world application. The paper argues that coupling advances in RL algorithms with innovations in systems design will drive new RL applications.

Research: (1) Build systems for RL that fully exploit parallelism, while allowing dynamic task graphs, providing millisecond-level latencies, and running on heterogeneous hardware under stringent deadlines. (2) Build systems that can faithfully simulate the real-world environment, as the environment changes continually and unexpectedly, and run faster than real time.
https://christmasloveday.blogspot.com//search?q=paper-summary-real-time-machine
Of course, the second part here refers to the research described in "Real-Time Machine Learning: The Missing Pieces". Simulated Reality (SR) focuses on continually simulating the physical world with which the agent is interacting. Trying to simulate multiple possible futures of a physical environment in high fidelity within a couple of milliseconds is a very ambitious goal. But research here can also help other fields, so this is interesting.
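To make the systems requirements in the first research item a bit more concrete, here is a tiny dynamic-task-graph sketch using only the Python standard library. This is my own toy illustration, not the paper's system (Ray, also from Berkeley, is the real instance of this idea): new rollout tasks are spawned based on results as they arrive, while everything runs in parallel.

```python
# Toy sketch of a dynamic task graph for RL-style rollouts, standard library only.
# This illustrates the idea; Berkeley's Ray is a real implementation of it.
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def rollout(policy_version, seed):
    """Simulate one environment rollout and return its total reward."""
    rng = random.Random(seed)
    return policy_version, sum(rng.uniform(-1, 1) for _ in range(100))

def improve(policy_version):
    """Stand-in for a policy update step."""
    return policy_version + 1

with ThreadPoolExecutor(max_workers=8) as pool:
    policy = 0
    futures = {pool.submit(rollout, policy, s) for s in range(8)}
    for _ in range(3):                       # a few rounds of "learn, then launch more work"
        done = next(as_completed(futures))   # react to whichever rollout finishes first
        futures.discard(done)
        version, reward = done.result()
        policy = improve(policy)
        # The task graph grows dynamically: new work depends on results just computed.
        futures.add(pool.submit(rollout, policy, random.randrange(10**6)))
        print(f"rollout from policy v{version} -> reward {reward:.2f}")
```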
R2: Robust decisions
The challenges here are: (1) robust learning in the presence of noisy and adversarial feedback, and (2) robust decision-making in the presence of unforeseen and adversarial inputs.

Research: (1) Build fine-grained provenance support into AI systems to connect changes in outcomes (e.g., reward or state) to the data sources that caused these changes, and automatically learn causal, source-specific noise models. (2) Design API and language support for developing systems that maintain confidence intervals for decision-making, and in particular can flag unforeseen inputs.
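As a toy illustration of the second research item, here is a minimal sketch of flagging unforeseen inputs. The z-score test and the threshold are my own illustrative choices, not an API proposed in the paper:

```python
# Toy sketch: flag "unforeseen" inputs by comparing them against the training
# distribution. The per-feature z-score check and threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))   # stand-in training inputs
mean, std = train.mean(axis=0), train.std(axis=0)

def flag_unforeseen(x, z_threshold=4.0):
    """Return True if any feature of x lies far outside the training range."""
    z = np.abs((x - mean) / std)
    return bool(np.any(z > z_threshold))

print(flag_unforeseen(np.array([0.1, -0.3, 0.5, 0.0])))   # False: typical input
print(flag_unforeseen(np.array([0.1, -0.3, 9.0, 0.0])))   # True: unforeseen input
```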
R3: Explainable decisions
Here we are in the domain of causal inference, a field "which will be essential in many future AI applications, and one which has natural connections to diagnostics and provenance ideas in databases."

Research: Build AI systems that can support interactive diagnostic analysis, that faithfully replay past executions, and that can help determine the features of the input that are responsible for a particular decision, possibly by replaying the decision task against past perturbed inputs. More generally, provide systems support for causal inference.
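The "replay the decision against perturbed inputs" idea can be sketched as below. The linear scorer and the zero-out perturbation are hypothetical stand-ins; a real system would also need provenance and replay support to do this faithfully.

```python
# Toy sketch of explaining a decision by replaying it with perturbed inputs:
# a feature whose perturbation strongly shifts the output is "responsible"
# for the decision. The model and data are illustrative only.
import numpy as np

weights = np.array([2.0, -0.5, 0.0, 1.5])     # stand-in linear scorer
decide = lambda x: float(x @ weights)          # the "decision" being explained

x = np.array([1.0, 2.0, 3.0, 0.5])
baseline = decide(x)

for i in range(len(x)):
    perturbed = x.copy()
    perturbed[i] = 0.0                         # simple "remove this feature" perturbation
    print(f"feature {i}: decision shifts by {baseline - decide(perturbed):+.2f}")
```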
Secure AI
R4: Secure enclaves
A secure enclave is a secure execution environment, which protects the application running inside from malicious code running outside.

Research: Build AI systems that leverage secure enclaves to ensure data confidentiality, user privacy, and decision integrity, possibly by splitting the AI system's code between a minimal code base running inside the enclave and code running outside the enclave. Ensure the code inside the enclave does not leak information or compromise decision integrity.
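Here is a conceptual sketch of the suggested code split. A real deployment would run the trusted part inside an SGX-style enclave; this toy version only illustrates the boundary, and the class names are made up.

```python
# Conceptual sketch of the code split the paper suggests: a minimal trusted
# component holds the model and returns only the decision, while untrusted
# code outside handles everything else. In a real system TrustedScorer would
# run inside an enclave; here the split is shown only at a class boundary.

class TrustedScorer:
    """Minimal code base: holds confidential weights, exposes only decisions."""
    def __init__(self, weights):
        self._weights = weights          # confidential; never returned to callers

    def decide(self, features):
        score = sum(w * f for w, f in zip(self._weights, features))
        return "approve" if score > 0 else "deny"   # only the decision leaves

class UntrustedFrontend:
    """Code outside the enclave: parsing, logging, networking, etc."""
    def __init__(self, scorer):
        self._scorer = scorer

    def handle_request(self, raw):
        features = [float(v) for v in raw.split(",")]
        return self._scorer.decide(features)

frontend = UntrustedFrontend(TrustedScorer([0.8, -0.2, 0.1]))
print(frontend.handle_request("1.0,2.0,3.0"))   # -> approve
```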
R5: Adversarial learning
The adaptive nature of ML algorithms opens learning systems to new categories of attacks: evasion attacks and data poisoning attacks.

Research: Build AI systems that are robust against adversarial inputs both during training and prediction (e.g., decision making), possibly by designing new machine learning models and network architectures, leveraging provenance to track down fraudulent data sources, and replaying to redo decisions after eliminating the fraudulent sources.
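For a concrete feel of an evasion attack, here is a minimal fast-gradient-sign-method-style sketch against a toy logistic regression model. The model, input, and epsilon are all illustrative.

```python
# Toy evasion attack in the spirit of the fast gradient sign method (FGSM):
# nudge the input in the direction that increases the classifier's loss.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([1.5, -2.0, 0.5]), 0.1          # a "trained" toy classifier
x, y = np.array([1.0, -1.0, 0.5]), 1.0          # an input with true label 1

# Gradient of the logistic loss with respect to the input is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

eps = 1.0                                        # large enough here to flip the decision
x_adv = x + eps * np.sign(grad_x)                # evasion: small, targeted perturbation

print(f"clean prediction:       {sigmoid(w @ x + b):.3f}")       # ~0.98
print(f"adversarial prediction: {sigmoid(w @ x_adv + b):.3f}")   # ~0.46
```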
R6: Shared learning on confidential data
The paper observes that, despite the large volume of theoretical research, there are few practical differential privacy systems in use today, and proposes to simplify differential privacy usage for real-world applications.

Research: Build AI systems that (1) can learn across multiple data sources without leaking information from a data source during training or serving, and (2) provide incentives to potentially competing organizations to share their data or models.
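As a reminder of what the basic differential privacy building block looks like, here is a minimal Laplace-mechanism sketch, with noise scaled to sensitivity/epsilon. Real shared-learning systems need far more than this; the data and parameters below are made up.

```python
# Minimal sketch of the Laplace mechanism: release an aggregate (a count)
# with noise calibrated to sensitivity / epsilon. Illustrative values only.
import numpy as np

rng = np.random.default_rng(42)

def private_count(records, predicate, epsilon=0.5, sensitivity=1.0):
    """Release a differentially private count: true count plus Laplace noise."""
    true_count = sum(1 for r in records if predicate(r))
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

records = [{"age": a} for a in (23, 35, 41, 29, 52, 61, 33)]
print(private_count(records, lambda r: r["age"] > 40))   # noisy answer near 3
```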
AI specific architectures
R7: Domain specific hardware
The paper argues that "the one path left to continue the improvements in performance-energy-cost of processors is developing domain-specific processors." It mentions the Berkeley Firebox project, which proposes a multi-rack supercomputer that connects thousands of processor chips with thousands of DRAM chips and nonvolatile storage chips using fiber optics to provide low latency and high bandwidth over long physical distances.

Research: (1) Design domain-specific hardware architectures to improve the performance and reduce the power consumption of AI applications by orders of magnitude, or enhance the security of these applications. (2) Design AI software systems to take advantage of these domain-specific architectures, resource disaggregation architectures, and future non-volatile storage technologies.
R8: Composable AI systems
The paper says modularity and composition will be key to increasing the development speed and adoption of AI. The paper cites the Clipper project.

Research: Design AI systems and APIs that allow the composition of models and actions in a modular and flexible manner, and develop rich libraries of models and options using these APIs to dramatically simplify the development of AI applications.
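Here is a hypothetical composition API in that spirit (this is not Clipper's actual interface): models and actions are plain callables chained into a pipeline, so an application is assembled from reusable pieces rather than built monolithically.

```python
# Hypothetical sketch of a composable model API: stages are plain callables
# over a request dict, and a pipeline composes them left-to-right.
from typing import Callable, List

Stage = Callable[[dict], dict]

def pipeline(stages: List[Stage]) -> Stage:
    """Compose stages left-to-right into a single callable."""
    def run(request: dict) -> dict:
        for stage in stages:
            request = stage(request)
        return request
    return run

def featurize(req):  return {**req, "features": [len(req["text"])]}
def sentiment(req):  return {**req, "score": 1.0 if "good" in req["text"] else -1.0}
def to_action(req):  return {**req, "action": "promote" if req["score"] > 0 else "ignore"}

app = pipeline([featurize, sentiment, to_action])
print(app({"text": "this product is good"})["action"])   # -> promote
```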
R9: Cloud-edge systems
The paper mentions the need to repurpose code for multiple heterogeneous platforms via re-targetable software design and compiler technology. It says "To address the wide heterogeneity of edge devices and the relative difficulty of upgrading the applications running on these devices, we need new software stacks that abstract away the heterogeneity of devices by exposing the hardware capabilities to the application through common APIs."

Research: Design cloud-edge AI systems that (1) leverage the edge to reduce latency, improve safety and security, and implement intelligent data retention techniques, and (2) leverage the cloud to share data and models across edge devices, train sophisticated computation-intensive models, and make high-quality decisions.
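A toy sketch of the cloud-edge split behind one common API: a cheap edge model answers confident cases locally (low latency) and defers the rest to a heavier "cloud" model. The class names and thresholds are my own illustrative choices.

```python
# Toy sketch of a cloud-edge split behind one common API. Confident cases are
# handled by a tiny local model; uncertain ones are deferred to a larger model.
import numpy as np

class EdgeModel:
    def predict(self, x):
        score = float(np.tanh(x.mean()))          # tiny, fast local model
        return score, abs(score)                  # (prediction, confidence)

class CloudModel:
    def predict(self, x):
        return float(np.tanh((x * np.arange(1, len(x) + 1)).sum()))  # "heavier" model

class CloudEdgePredictor:
    """The common API the application sees, regardless of where the work runs."""
    def __init__(self, edge, cloud, threshold=0.6):
        self.edge, self.cloud, self.threshold = edge, cloud, threshold

    def predict(self, x):
        score, confidence = self.edge.predict(x)
        if confidence >= self.threshold:
            return score, "edge"
        return self.cloud.predict(x), "cloud"     # defer uncertain cases

predictor = CloudEdgePredictor(EdgeModel(), CloudModel())
print(predictor.predict(np.array([2.0, 2.5, 3.0])))    # confident -> handled at the edge
print(predictor.predict(np.array([0.1, -0.1, 0.05])))  # uncertain -> sent to the cloud
```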
MAD questions
(The questions that led to these explanations are left as an exercise to the reader.)

1) In 2009, there was a similar position paper from Berkeley called "Above the Clouds: A Berkeley View of Cloud Computing". That paper did a very good job of summarizing, framing, and selling the cloud computing view to academia. But it looks like the research agenda/directions from that report didn't fare very well after 8 years, which is totally expected. Plans are useless but planning is indispensable. The areas of interest change over time and the research adapts to it. It is impossible to tightly plan and manage exploratory research in CS areas (maybe this is different in biology and other sciences).
I think it is a YES for items 4, 5, 6, and partial for the rest, with very little progress on items 2 and 9. While the opportunities did not include them, the following developments have since reshaped the cloud computing landscape:
- dominance of machine learning workloads in the cloud
- the rise of NewSQL systems, the trend toward more consistent distributed databases, and the importance of coordination/Paxos/ZooKeeper in the cloud
- the development of online in-memory dataflow and stream processing systems, such as Spark, which came out of Berkeley
- the race towards finer-granularity virtualization via containers and functions as a service
- the prominence of SLAs (mentioned only once in the paper)
So even though the AI-systems agenda from Berkeley makes a lot of sense, it will be instructive to watch how these pan out and what unexpected big AI-systems areas open up in the coming years.
2) Stanford also released a similar position paper earlier this year, although theirs had a more limited scope/question: developing a [re]usable infrastructure for ML. Stanford's DAWN project aims to target end-to-end ML workflows, empower domain experts, and optimize end-to-end. This figure summarizes their vision for the reusable ML stack:
Of course, again, this inevitably reflects the strengths and biases of the Stanford team; they are more on the database, data science, production side of things. It looks like this has some commonalities with the AI-specific architectures section of the Berkeley report, but different approaches are proposed for the same questions.
3) For R2: Robust decisions, it seems like formal methods, modeling, and invariant-based reasoning can be useful, particularly when concurrency control becomes an issue in distributed ML deployments.
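As a made-up example of what an invariant might look like in this setting, consider bounded staleness in an asynchronous parameter server: no update may be applied against parameters more than a fixed number of versions old. The sketch below only illustrates the invariant check; neither paper proposes this.

```python
# Illustrative sketch of invariant-based reasoning for distributed ML: in a
# bounded-staleness parameter server, the invariant is that no worker applies
# an update computed against parameters more than MAX_STALENESS versions old.
MAX_STALENESS = 2

class ParameterServer:
    def __init__(self):
        self.version = 0
        self.value = 0.0

    def read(self):
        return self.version, self.value

    def apply(self, base_version, delta):
        staleness = self.version - base_version
        assert staleness <= MAX_STALENESS, (        # the invariant under test
            f"staleness {staleness} exceeds bound {MAX_STALENESS}")
        self.value += delta
        self.version += 1

ps = ParameterServer()
v0, _ = ps.read()
ps.apply(v0, 0.1)      # staleness 0: fine
ps.apply(v0, 0.1)      # staleness 1: fine
ps.apply(v0, 0.1)      # staleness 2: still within the bound
# ps.apply(v0, 0.1)    # staleness 3: would violate the invariant and trip the assert
```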