Lessons from our Pupils: A Reflection [Podcast Episode 150]

For this week’s blog post, related to Podcast Episode 150, we are going to do something different and look at one of the resources patients can use to find clinical trials and how to navigate it. ClinicalTrials.gov is a database of privately and publicly funded trials conducted around the world that can be used to find trials for a condition or disease. As Dr. Sridhar and Dr. Pennesi discussed, it can be difficult for a patient to look up information because of the technical terminology used. Today we are going to talk about how to navigate the site and what certain terms mean.

When first opening the website, the following screen is the first thing that you see:

[Screenshot: the ClinicalTrials.gov simple search page]

Image Credit: https://clinicaltrials.gov/ct2/home

This is the simple search format they offer. The first thing that must be selected is the status of the study. Here you can select studies that are recruiting participants or have not yet started to recruit (“recruiting and not yet recruiting studies”), or all studies, including those that are no longer recruiting or are suspended, terminated, or completed. The next box to be filled out is the disease or condition of interest. For example, if you are looking for studies related to X-linked retinoschisis, this is where it can be specified. The third box allows you to narrow your search with other terms, such as a drug name, the name of the investigator, or the NCT number, the National Clinical Trial identifier assigned to each registered clinical trial. Finally, in the last box the search can be narrowed to a desired country.
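For readers comfortable with a bit of code, the same kind of search can also be run programmatically. The sketch below is only illustrative: the endpoint and parameter names follow ClinicalTrials.gov’s public API as we understand it and may change over time, so treat it as a rough starting point rather than an official client.

```python
# Illustrative sketch: searching ClinicalTrials.gov from a script.
# Endpoint and parameter names are assumptions based on the site's
# public v2 API and may change; this is not an official client.
import requests

def search_trials(condition, status=None, country=None, page_size=20):
    """Return (NCT ID, title, status) tuples for a condition."""
    params = {"query.cond": condition, "pageSize": page_size}
    if status:
        params["filter.overallStatus"] = status   # e.g. "RECRUITING"
    if country:
        params["query.locn"] = country            # e.g. "United States"
    resp = requests.get("https://clinicaltrials.gov/api/v2/studies",
                        params=params, timeout=30)
    resp.raise_for_status()
    out = []
    for study in resp.json().get("studies", []):
        ident = study["protocolSection"]["identificationModule"]
        stat = study["protocolSection"]["statusModule"]
        out.append((ident["nctId"], ident["briefTitle"],
                    stat["overallStatus"]))
    return out

# Example: the search performed in this post
for nct, title, status in search_trials("X-linked retinoschisis"):
    print(nct, status, title)
```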

For this post, we will be looking at clinical trials related to X-linked retinoschisis. After selecting “All studies” and specifying the condition or disease as “X-linked retinoschisis”, the search returns the following window:

[Screenshot: search results for “X-linked retinoschisis”]

Image Credit: https://clinicaltrials.gov/ct2/home

As can be seen, all the registered clinical trials for X-linked retinoschisis appear here. Each study shows its status, title, the condition being studied, the intervention being used, and the location. On the left-hand side, there is a filter panel that can help narrow results. Once again, you can filter by the status of the study by selecting the options you are interested in.

Next is the eligibility criteria. These are the key requirements that people who want to participate in the study must meet; each study will have different criteria, including age and sex.

Following eligibility criteria, you can narrow results by study type. An interventional study, or clinical trial, is a type of study in which the effect of an intervention or treatment is studied by assigning participants to groups. In an observational study, the participants are assessed for certain health outcomes but are not assigned to a specific intervention or treatment. The last type is an expanded access study, which offers patients with serious diseases or conditions who cannot participate in a clinical trial a way to receive medical products that have not yet been approved by the FDA.

You can also select studies that do or do not have results posted, as well as the phase they are in. A phase 1 study is usually conducted with healthy volunteers, with the goal of determining a drug’s most serious side effects and how the drug is broken down in the body and excreted. In phase 2, the focus is on whether the drug works in people with a certain condition or disease. Phase 3 aims to gather more information about the drug’s safety and effectiveness at different dosages. The last phase, phase 4, takes place after the drug has been approved by the FDA for marketing and involves further study of its safety, efficacy, and optimal use. Lastly, the funder type and the availability of study documents can be selected.

Once you find a trial of interest, you can learn more by clicking on the study title. The new window will give background information on the study: what it is trying to accomplish and how, what intervention or drug is being used, and a brief section on what will be expected of the participant. It will also list the study start and end dates, any inclusion and exclusion criteria that must be met in order to participate, and contact information for those interested in joining the trial if it is still recruiting participants.

-Amy Kloosterboer

Lessons from our Pupils: A Reflection [Podcast Episode 149]

In Episode 149 (LINK), Jay was joined by Dr. Jean-Pierre Hubschman to discuss robotic surgery and its future in the field of Ophthalmology. For this post, we thought it would be interesting to look back at how robotic surgery developed and where it stands today.

The introduction of robotics in industry began in 1951 with the first mechanical arm, constructed to handle radioactive material. Ten years later, the first industrial robot was built for General Motors. In medicine, it was not until 1983 that a robot was used to assist in surgery. The Arthrobot, designed in Vancouver, was used in orthopedic surgery and performed over 60 arthroscopic procedures. Two years later, the PUMA 200 was used to perform a CT-guided brain biopsy (Figure 1). This was so successful that the system began to be used for urological procedures at Imperial College London in 1988. Two different models were used for prostate surgery, and both shared the same limitation: the robots could be programmed based on a fixed anatomical landmark but could not be used for dynamic surgical targets.

 

During the 1990s, the Automated Endoscopic System for Optimal Positioning (AESOP) was built. This endoscopic camera could be controlled by the surgeon’s voice commands and was used for a variety of operations, including laparoscopic cholecystectomy, hernioplasty, fundoplication, and colectomy. This robotic model was taken a step further with Zeus, a system whose arms and surgical instruments could be controlled by the surgeon (Figure 2). Zeus was used for the first time in 1998 at the Cleveland Clinic for a fallopian tube anastomosis. In 2001 this model was used for the first transatlantic surgery, a laparoscopic cholecystectomy performed in Strasbourg while the surgeon, Dr. Jacques Marescaux, operated from New York.




Figure 2: Zeus robotic surgical system. a. surgeon console, b. robotic arms.

Image credit https://link.springer.com/article/10.1007%2Fs00268-016-3543-9

Around the same time, the da Vinci system was designed. First used in 1997 for a robotic-assisted cholecystectomy in Brussels, Belgium, this model gained popularity, and in 2000 the FDA approved it for abdominal surgeries. It overcame many of the previous robots’ limitations and could replicate exactly what a human arm could do. The system consists of three parts: a Vision System that includes a high-definition 3D endoscope and a large viewing monitor, a Patient-side Cart with the robotic arms controlled by the surgeon, and the Surgeon Console from which the surgeon performs the operation.

Over the years this system has been upgraded. Most notably, in 2002 the robot consisted of three operating arms. In 2006 a new model offered better handling and an increased range of motion, allowing for a larger surgical field. Finally, in 2009 the imaging system was upgraded and a second surgeon’s console was added to allow less experienced surgeons to train.

While robotic surgery is now commonly used in many fields of medicine, including neurosurgery, GI, urology, orthopedic surgery, and more, it is still in its infancy in ophthalmology. As technology progresses, robotics is expected to be introduced more widely into ophthalmic practice. As discussed in this episode, the two areas expected to see robotics first are vitreoretinal and cataract surgery.

 


Figure 3: da Vinci Surgical System.

Image credit: https://www.davincisurgery.com/da-vinci-surgery/da-vinci-surgical-system/

Lessons from our Pupils: A Reflection [Podcast Episode 148]

Typical optical setup of single point OCT. Scanning the light beam on the sample enables non-invasive cross-sectional imaging up to 3 mm in depth with micrometer resolution. Image Credit: https://en.wikipedia.org/wiki/Optical_coherence_tomography#Theory

Medicine is constantly progressing, in part due to the technological advances of today’s digital age. In ophthalmology, one of the most important recent technological advances has been the development of optical coherence tomography, or OCT. Within the past ~25 years, OCT has become ubiquitous in the field, with applications in nearly all sub-specialties and for countless clinical purposes. Given our recent podcast episode’s (link - http://www.retinapodcast.com/episodes/2019/1/6/episode-148-january-2019-retinal-physician-review-including-digital-imaging-in-vr-surgery-surgery-in-rop-eyes-vkh-achromatopsia) discussion of the growing use of intraoperative OCT, for today’s Lessons from our Pupils blog post we wanted to take a look at the history of OCT and cover some basics of the science behind it.

Optical coherence tomography – the name itself gives the reader a good idea about the principles at play. Optical, in the world of physics, suggests involvement of the visible portion of the electromagnetic spectrum. Coherence refers to the state in which two waves are in sync (called “in-phase”) with each other. And finally, tomography refers to imaging through slices (“sectioning”). We could assume that OCT, then, should involve the use of two waves of light (and whether or not they are in-phase) to image a slice of an object.

At this point, we should probably wrap up this blog post, since we already went over all there is to know about OCT! For those who would like to keep reading, let’s dive into some more specifics. As you would expect with the word “coherence,” OCT requires the comparison of two different waves of light, produced by a beam splitter inside the machine. One beam travels to a “reference mirror” while the other travels to the sample you are trying to image. When the beams of light return to the device, they are recombined at the beam splitter and analyzed using a photodetector. Depending on how in-phase or out-of-phase the two beams are when they recombine, the computer can assign a different intensity to that portion of the sample; this is repeated thousands and thousands of times to create the final image.
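To make that idea concrete, here is a minimal numerical sketch of two-beam interference, the signal at the heart of OCT. The wavelength and intensities are illustrative values we picked ourselves, not parameters of any particular device.

```python
# Minimal sketch of two-beam interference, the core signal in OCT.
# All numbers are illustrative, not taken from any real instrument.
import numpy as np

wavelength = 840e-9                 # a typical near-infrared OCT source (m)
k = 2 * np.pi / wavelength          # wavenumber of the light
I_ref, I_sam = 1.0, 0.25            # intensity returning from each arm

# Path-length difference between the reference and sample arms (m)
delta_z = np.linspace(0, 1.5 * wavelength, 7)

# Detected intensity: a constant term plus an interference (cross) term
# that oscillates with the round-trip phase difference 2*k*delta_z.
I_det = I_ref + I_sam + 2 * np.sqrt(I_ref * I_sam) * np.cos(2 * k * delta_z)

for dz, I in zip(delta_z, I_det):
    print(f"path difference {dz * 1e9:7.1f} nm -> detected intensity {I:.3f}")
```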

The specifics of this process, however, vary based on OCT type. There exist two “domains” for analyzing coherence: the time domain (TD) and the frequency domain (FD). In the time domain, the time it takes for the beam of light to travel to the reference mirror and back to the photodetector (the “reference arm pathlength”) is altered through movement of the reference mirror; this allows “scanning” of different depths of your tissue sample. In the frequency domain, different frequencies of light (which penetrate to different depths of the tissue sample) are included in each beam, and these frequencies are detected in parallel using spectrally separated detectors; this allows for much greater speed of analysis, since the reference mirror does not need to be moved for different sample depths to be analyzed. While the initial TD-OCT systems could perform 400 axial scans (A-scans) per second (a low rate, due to the need to move the reference mirror), Spectral Domain OCT (SD-OCT), which uses the FD, can reach ~300,000 A-scans/second (though most clinical systems operate at rates below this). Since we can acquire images so much more quickly, we are now able to perform three-dimensional scans of tissue.
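As a rough illustration of the frequency-domain idea, the sketch below (again with made-up numbers) builds a spectral interferogram from two reflectors at different depths and recovers their positions with a Fourier transform, with no moving reference mirror involved.

```python
# Rough sketch of frequency-domain OCT: each reflector in the sample
# modulates the detected spectrum, and a Fourier transform of that
# spectrum recovers a depth profile (an A-scan). Values are made up.
import numpy as np

# Sample the detected spectrum over a band of wavenumbers k (1/m)
k = np.linspace(7.2e6, 7.8e6, 2048)
dk = k[1] - k[0]

# Two reflectors in the sample arm: (depth in meters, reflectivity)
reflectors = [(0.2e-3, 0.8), (0.9e-3, 0.4)]

# Each reflector contributes a cosine fringe in k whose frequency
# grows with its depth (round-trip phase 2*k*depth).
spectrum = np.zeros_like(k)
for depth, r in reflectors:
    spectrum += r * np.cos(2 * k * depth)

# An FFT over k turns fringe frequencies back into depths:
# a fringe with f cycles per unit k corresponds to depth z = pi * f.
a_scan = np.abs(np.fft.rfft(spectrum))
depth_axis = np.fft.rfftfreq(k.size, d=dk) * np.pi

# The two strongest peaks land near the true reflector depths
for i in sorted(np.argsort(a_scan)[-2:]):
    print(f"recovered reflector near {depth_axis[i] * 1e3:.2f} mm")
```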

Although you may not deal with the physics behind OCT in your daily clinic, it is always nice to learn a little bit about what goes into the technology that we use. We hope that you enjoyed reading about OCT and we are excited to see just how far this technology will go, both in the field of ophthalmology and beyond.

- Michael Venincasa

For more information, you may be interested in: 

Invest Ophthalmol Vis Sci. 2011 Apr; 52(5): 2425–2436. doi: 10.1167/iovs.10-6312

https://en.wikipedia.org/wiki/Optical_coherence_tomography#Theory

Lessons from our Pupils: A Reflection [Podcast Episode 147]

In January 2nd’s episode (LINK), Jay was joined by Dr. Allen Ho to discuss his recently published paper in Nature Medicine on the successful use of an intravitreal oligonucleotide for a form of Leber Congenital Amaurosis (LCA). Dr. Matthew Weed then joined the podcast to discuss the article and compare the approach with previously used therapies like Luxturna. In this post we are going to review what Leber Congenital Amaurosis is and how this new oligonucleotide functions to rescue one of the mutations that causes LCA.

Leber Congenital Amaurosis is a genetic disorder that primarily affects the retina, leading to visual impairment beginning in infancy. Patients suffering from this disorder can also have nystagmus, photophobia, and slow pupillary reactions. Vision progressively deteriorates from early childhood, ultimately resulting in vision loss at around thirty to forty years of age. The disorder is inherited primarily in an autosomal recessive pattern. Mutations in at least 14 genes have been identified, the most common being CEP290, CRB1, GUCY2D, and RPE65. You may have read or listened to our recent discussion on the use of gene therapy for RPE65 (LINK TO PODCAST AND BLOG). The article discussed in Episode 147 focused on CEP290, a gene that plays an important role in the development of centrosomes and cilia. The most common CEP290 mutation leading to LCA causes a splicing error in the pre-mRNA. The mutation, which changes an adenine to a guanine, occurs within one of the introns of CEP290. This creates a new splice-donor site, and a new exon (Exon 10) is aberrantly inserted into the final mRNA. This new exon carries a premature stop codon that, when translated, results in a truncated CEP290 protein that no longer functions like the original protein.

An antisense oligonucleotide (ASO) is a short (generally 13-25 nucleotides) single-stranded DNA molecule that can hybridize to a unique target sequence in a cell. The first generation of ASOs was designed to target mRNA and thereby knock down the transcript via endonuclease-mediated degradation. These agents had the disadvantage of fast turnover, which prevented them from reaching intracellular concentrations sufficient to suppress their targets. Newer ASOs have been developed with modified backbones that work through different mechanisms, such as preventing ribosome recruitment to inhibit translation, or sterically blocking splicing factors to alter pre-mRNA splicing.

The ASO (QR-110) used in the paper discussed in Episode 147 works by modifying splicing in a slightly different manner than described above. QR-110 binds to the CEP290 pre-mRNA at the intron that contains the mutation. This binding prevents the creation of a new splice site, which causes the pre-mRNA to be processed as the wild-type pre-mRNA, without the inclusion of Exon 10. The protein translated from the mRNA is a wild-type CEP290 protein that can function normally in the development of centrosomes and cilia.
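As a toy illustration of this rescue, the sketch below uses invented placeholder sequences (not the real CEP290 sequence, and simple strings standing in for the actual splicing machinery) to show how including a pseudo-exon with a premature stop codon truncates the protein, and how blocking that splice event restores the full-length product.

```python
# Toy model of the CEP290 splicing defect and its ASO rescue.
# All sequences are invented placeholders, not real CEP290 sequence.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def translate_length(mrna):
    """Translate in-frame codons until the first stop; return codon count."""
    count = 0
    for i in range(0, len(mrna) - 2, 3):
        if mrna[i:i + 3] in STOP_CODONS:
            break
        count += 1
    return count

exon_1 = "ATGGCTGCTGCA"       # toy coding sequence (4 codons)
exon_2 = "GCAGCTGCAGCTTAA"    # ends with the normal stop codon
pseudo = "GCTTGAGCA"          # pseudo-exon carrying a premature TGA stop

def spliced_mrna(aso_bound):
    """Mutant pre-mRNA includes the pseudo-exon unless an ASO
    sterically blocks the cryptic splice site the mutation created."""
    if aso_bound:
        return exon_1 + exon_2            # wild-type-like mRNA
    return exon_1 + pseudo + exon_2       # aberrant mRNA, premature stop

print("without ASO:", translate_length(spliced_mrna(False)), "codons (truncated)")
print("with ASO:   ", translate_length(spliced_mrna(True)), "codons (full length)")
```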

-Amy Kloosterboer

 

Lessons from our Pupils: A Reflection [Podcast Episode 146]

For the final episode of 2018 (LINK), Jay was joined for Journal Club by Drs. Daniel Chao and Ajay Kuriyan to discuss three recent publications: first the PIVOT trial comparing primary vitrectomy and pneumatic retinopexy, then the FLUID study looking at the effect of residual subretinal fluid (SRF) in wet AMD, and finally a study out of Stanford University looking at the timing of macula-off retinal detachment repair.

All three articles measured visual acuity as an outcome but did so using different methods. The PIVOT and FLUID trials used ETDRS logMAR charts, and the macula-off retinal detachment repair study used a Snellen chart. Visual acuity, the ability to resolve fine spatial detail, is a commonly chosen endpoint of clinical studies. In this post we will discuss how the methods used to measure it were developed and subsequently improved upon.


Image credit: https://en.wikipedia.org/wiki/Snellen_chart

The Snellen chart is an iconic image, nearly ubiquitous in the doctor’s office. It was created by Dutch ophthalmologist Dr. Herman Snellen in 1862. After his colleague, Dr. Franciscus Donders, had started to diagnose vision problems by asking patients to gaze at a card on a distant wall, he asked Dr. Snellen to develop a tool for this purpose. The original chart had shapes of various sizes, including squares, circles, and plus signs. This proved challenging to use, since patients had to describe the symbols they saw. To simplify the process, letters eventually replaced the symbols, giving way to the Snellen chart we use today. It is composed of eleven lines of capital letters that decrease in size as you progress down the rows. The patient is asked to stand 20 feet away from the chart, cover one eye, and read from it without using glasses. The top number in the ratio next to each line represents the distance from the chart, and the bottom number is the distance at which a person with “normal” eyesight can read that same line. A person with normal visual acuity should be able to correctly read line 8 from 20 feet away.

The logMAR chart (Logarithm of the Minimum Angle of Resolution) is another chart used to test visual acuity and was developed in 1976 at the National Vision Research Institute of Australia. It was designed to be more accurate than the Snellen chart and is therefore commonly used in research. The design allows for a logarithmic, proportional change in letter size and spacing from line to line. It also addresses some of the concerns with the Snellen chart: unlike the Snellen, the logMAR chart has the same number of letters in each line, each with the same degree of difficulty. To test visual acuity using the logMAR method, the patient is asked to stand 4 meters (about 13.1 feet) away from the chart. Each letter is worth 0.02 log units, for a total change of 0.1 log units per line. A person with “normal” eyesight should receive a logMAR score of zero, while those with poorer vision will receive a positive number.
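For a concrete sense of the scale, here is a small worked sketch (our own illustration, not taken from any of the trials discussed) converting Snellen fractions to logMAR and applying the per-letter scoring described above.

```python
# Converting Snellen acuity to logMAR, plus the per-letter scoring
# rule described above (5 letters per line, 0.02 log units each).
import math

def snellen_to_logmar(test_distance, denominator):
    """logMAR is log10 of the minimum angle of resolution (in arcmin),
    which for a Snellen fraction equals denominator / test_distance."""
    return math.log10(denominator / test_distance)

for denom in (20, 40, 100, 200):
    print(f"20/{denom} -> logMAR {snellen_to_logmar(20, denom):.1f}")

# Letter-by-letter scoring: start from the last complete line read and
# subtract 0.02 for each additional letter read on the next line.
line_score = snellen_to_logmar(20, 40)  # patient completed the 20/40 line
extra_letters = 3                       # read 3 letters of the next line
print("letter-by-letter score:", round(line_score - extra_letters * 0.02, 2))
```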

-Amy Kloosterboer
