Managing the "information hazards" of synthetic biology research




In 2016, synthetic biologists reconstructed horsepox, a likely extinct relative of the smallpox virus, from mail-order DNA for about $100,000. The experiment was strictly for research purposes, and the virus itself is harmless to humans. But the published results, including the methodology, raised fears that a malicious actor with the right resources could recreate a pathogen capable of causing a pandemic. In an editorial published today in PLOS Pathogens, Kevin Esvelt, a professor at the MIT Media Lab who develops and studies gene-editing technologies, argues for stronger biosecurity measures and earlier peer review of risky research to control these "information hazards": published information that can be used to cause harm. Esvelt spoke with MIT News about his ideas.

Q: What are information hazards, and why are they an important issue in synthetic biology?

A: Our society is not comfortable with the idea that some information is dangerous, but it is unfortunately true. No one believes that blueprints for nuclear weapons should be public, yet we collectively believe that the genome sequences of viruses should be. That wasn't a problem until DNA synthesis became really good. The current system for regulating dangerous biological agents is bypassed by DNA synthesis, which is becoming accessible to a wide range of people while the instructions for doing nasty things are freely available online.

In the horsepox study, for example, the information hazard lies partly in the paper and the methods it describes. But it also lies in the media coverage emphasizing that something harmful can be done. And this is compounded by those of us who are alarmed, because when we talk to reporters about the potential danger, that only feeds the problem. As critics of this work, we too spread information hazards.

Part of the solution is simply to recognize that transparency has costs and to take steps to minimize them. That means raising awareness of information hazards and being a bit more cautious when we discuss hazardous work, and especially when we cite it. Information hazards are a tragedy-of-the-commons problem. Everyone thinks that if something is already published, one more citation won't hurt. But everyone thinks that way, and the information keeps spreading until it's on Wikipedia.

Q: You say that one problem in synthetic biology is screening DNA orders for potentially dangerous sequences. How can cryptography help promote a market for "clean" DNA?

A: We really need to do something about the ease of DNA synthesis and the accessibility of potential pandemic pathogens. The obvious solution is to set up some kind of screening for all DNA synthesis. The International Gene Synthesis Consortium (IGSC) was set up by industry leaders in DNA synthesis after the anthrax attacks. To be a member, a company must demonstrate that it screens its orders, but member companies cover only about 80 percent of the commercial market and none of the synthesis facilities inside large companies. And there is no external way to verify that IGSC companies are actually screening, or screening for the right things.
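To make the screening step concrete, here is a minimal sketch of plaintext order screening: slide a fixed-size window over each order and flag any window that appears in a list of sequences of concern. This is not IGSC's actual protocol; the window size, function names, and toy sequences are illustrative assumptions.

```python
WINDOW = 30  # window length in bases; an assumption, real screens vary

def windows(seq, k=WINDOW):
    """Yield every k-base substring (window) of a DNA sequence."""
    seq = seq.upper()
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def screen_order(order_seq, hazard_windows):
    """Return True if the order is clean, False if any window matches."""
    return all(w not in hazard_windows for w in windows(order_seq))

# Toy hazard list built from a made-up "sequence of concern".
hazard = set(windows("ATG" + "ACGT" * 20))
print(screen_order("TTTT" + "ACGT" * 20, hazard))  # False: shares a window
print(screen_order("ATCGATCG" * 10, hazard))       # True: no overlap
```

The weakness of this naive version is that whoever operates the screen must hold the hazard list in readable form, which is exactly the problem the cryptographic approach described below is meant to solve.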

We need a more centralized system, in which every DNA synthesis order in the world is screened autonomously and approved for synthesis only if no hazardous sequences are found in it. This is a cryptography problem.

On one side, you have trade secrets, because companies that make DNA do not want others to know what they are manufacturing. On the other side, you have a hazard database that should be useless if stolen. You want to encrypt orders, send them to a central database, and then learn only whether they are safe or not. Then you need a way for users to add entries to the database privately. This is entirely achievable with modern cryptography. You can use what are called hashes [one-way functions that convert an input into a fixed-length output], or a newer method called fully homomorphic encryption, which allows you to perform computations on encrypted data without ever decrypting it.
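As an illustration of the hash-based variant, here is a minimal sketch assuming SHA-256 over fixed-length windows. One caveat, noted in the comments: short DNA windows are enumerable, so plain hashes can be brute-forced, which is part of why a deployed system might prefer keyed constructions or the homomorphic-encryption route.

```python
import hashlib

WINDOW = 30  # assumed window length, as in the earlier sketch

def window_digests(seq, k=WINDOW):
    """SHA-256 digest of every k-base window of a sequence."""
    seq = seq.upper()
    return {hashlib.sha256(seq[i:i + k].encode()).digest()
            for i in range(len(seq) - k + 1)}

def screen_hashed(order_seq, hazard_digests):
    """True if no hashed window of the order appears in the hazard database."""
    return window_digests(order_seq).isdisjoint(hazard_digests)

# The database operator stores digests only, so a stolen copy does not
# directly reveal the sequences. Caveat: 30-base windows are enumerable,
# so plain SHA-256 is weaker than what a production system would need.
hazard_db = window_digests("ATG" + "ACGT" * 20)  # toy hazard entry
print(screen_hashed("TTTT" + "ACGT" * 20, hazard_db))  # False: flagged
print(screen_hashed("ATCGATCG" * 10, hazard_db))       # True: clean
```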

We are just starting to work on this challenge now. One point of this PLOS Pathogens paper is to lay the groundwork for that system.

In the long run, authorized experts could add hazards to the database on their own. This is the ideal way to manage information hazards. If I think of a sequence that, in my view, is very dangerous and that people should not make, ideally I could contribute it to the database, perhaps in concert with just one other authorized user. That would ensure that no one else can synthesize exactly that sequence, without unduly spreading the dangerous information of what it is and what it might do.
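Continuing the hashed-database sketch above (and reusing its window_digests function and hazard_db set), the contribution step might look like the following. The two-approval check is a loose, hypothetical rendering of the "one other authorized user" idea, not a real credential system.

```python
def contribute_hazard(seq, hazard_db, approvals):
    """Add the hashed windows of a sequence of concern to the shared
    database. Only digests leave the contributor's machine; the check
    for two approvals stands in for real multi-party authorization."""
    if len(approvals) < 2:
        raise PermissionError("at least two authorized approvals required")
    hazard_db |= window_digests(seq)

# Hypothetical expert identities; a real system would use credentials.
contribute_hazard("ATGTTTCCCGGGAAA" + "ACGT" * 10,
                  hazard_db,
                  approvals={"expert_a", "expert_b"})
```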

Q: You argue for peer review during the early stages of research. How would this help prevent information hazards?

A: The horsepox study was controversial as to whether the benefits outweighed the risks. One claimed benefit was pointing out that viruses can be created from scratch. In oncolytic virus therapy, where you engineer viruses to kill cancer, [this information] could speed up research. It was also suggested that horsepox could be used to make a better smallpox vaccine, but the researchers could not access an existing sample. Those benefits may be real. There is still a clear information hazard. Could it have been avoided?

Ideally, the horsepox study would have been examined by other experts, including some concerned about its implications, who could have pointed out, for example, that you could have made a virus with no dangerous relatives as your example, or made one useful for vaccine development, and then simply not specified that you built it from scratch. Then you would have had all the benefits of the research without creating any information hazard. This would have been possible if other experts had been given the chance to review the research plan before the experiments were conducted.

With the current process, there is usually just peer review at the end of the research. There is no feedback at the design stage, which is exactly when peer review would be most useful. Making this shift requires funders, journals, and governments to come together to change [the process] in small subfields. In fields clearly free of information hazards, you could publicly preregister your research plans and solicit comments. In fields with obvious hazards, such as synthetic mammalian virology, you would want research plans sent to a couple of peer reviewers in the field for assessment of safety and suggestions for improvement. Very often there is a better way to do the experiment than you originally imagined, and if reviewers can point it out early, so much the better. I think both models would result in faster science, which we want as well.

Universities could start by putting in place a special internal peer-review process for gene drive [a genetic engineering technology] and mammalian virology experiments. As a scientist working in both of these areas, I would be happy to participate. The question is: how can we do [synthetic biology] in a way that continues or even accelerates beneficial discoveries while avoiding those with potentially catastrophic consequences?
