News

Surgical planning in virtual reality: a systematic review
We just published a review on surgical planning in VR in the Journal of Medical Imaging. The systematic review examines how virtual reality (VR) is transforming surgical planning. With VR, physicians can assess patient-specific image data in 3D, enhancing surgical decision-making and the spatial localization of pathologies. We found that the benefits of VR are becoming increasingly evident. However, its application in surgical planning remains experimental, and effective clinical implementation will require refined study designs, improved technical reporting, and enhanced VR software usability. The authors of "Surgical planning in virtual reality: a systematic review" are Prof. Dr. Moritz Queisner and Karl Eisenträger.

Virtual reality (VR) technology has emerged as a promising tool for physicians, offering the ability to assess anatomical data in 3D with visuospatial interaction qualities. This systematic review aims to provide an up-to-date overview of the latest research on VR in the field of surgical planning.
A comprehensive literature search was conducted based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), covering the period from April 1, 2021 to May 10, 2023. The review summarizes the current state of research in this field, identifying key findings, technologies, study designs, methods, and potential directions for future research. Results show that the application of VR for surgical planning is still at an experimental stage but is gradually advancing toward clinical use. The diversity of study designs and methodologies and the variability in reporting hinder a comprehensive analysis. Some findings lack statistical evidence and rely on subjective assessments. To strengthen evaluation, future research should focus on refining study designs, improving technical reporting, defining visual and technical proficiency requirements, and enhancing VR software usability and design. Addressing these areas could pave the way for an effective implementation of VR in clinical settings.
Spatial computing in the OR

We tested the Apple Vision Pro in the operating theatre, and it made an excellent impression: great image quality even in challenging lighting conditions and stable interaction with the device. Still, the limited peripheral vision and awareness inherent to video-based devices remain a considerable downside in surgery.

We are also looking forward to bringing our first software solutions for improved hand-eye coordination in visceral surgery to this device!
AI-based intra- and postoperative measurement from stereoimages
The publication "Redefining the Laparoscopic Spatial Sense: AI-based Intra- and Postoperative Measurement from Stereoimages" has been accepted for the 38th AAAI Conference on Artificial Intelligence and is available via https://doi.org/10.48550/arXiv.2311.09744. The publication is the result of a fruitful collaboration between Karlsruhe Institute of Technology (KIT), Fraunhofer FIT, University of Bayreuth, and Charité – Universitätsmedizin Berlin. Authors are Leopold Müller, Patrick Hemmer, Moritz Queisner, Igor Sauer, Simeon Allmendinger, Johannes Jakubik, Michael Vössing, and Niklas Kühl.

A significant challenge in image-guided surgery is the accurate measurement of relevant structures such as vessel segments, resection margins, or bowel lengths. While such measurements are an essential component of many surgeries, they involve substantial human effort and are prone to inaccuracies. In this paper, we develop a novel human-AI-based method for laparoscopic measurements utilizing stereo vision, guided by practicing surgeons. Based on a holistic qualitative requirements analysis, this work proposes a comprehensive measurement method that combines state-of-the-art machine learning architectures, such as RAFT-Stereo and YOLOv8. The developed method is assessed in various realistic experimental evaluation environments. Our results outline the potential of our method, which achieves high accuracy in distance measurements with errors below 1 mm. Furthermore, on-surface measurements demonstrate robustness when applied in challenging environments with textureless regions. Overall, by addressing the inherent challenges of image-guided surgery, we lay the foundation for a more robust and accurate solution for intra- and postoperative measurements, enabling more precise, safe, and efficient surgical procedures.
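The geometric principle such a stereo pipeline builds on can be sketched compactly. The following is an illustrative toy example, not the authors' method: it assumes an ideal rectified stereo rig with invented intrinsics (focal length f, baseline, principal point cx/cy) and shows how a matched pixel's disparity yields a 3D point, and how two such points yield a distance.

```python
# Toy illustration of stereo-based measurement (not the authors' pipeline):
# in a rectified stereo rig, a pixel matched with disparity d lies at depth
# Z = f * B / d; two back-projected points give a Euclidean distance.
# The camera parameters below are invented example values.
import math

def triangulate(u, v, disparity, f=800.0, baseline=0.05, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) with disparity (px) to a 3D point (metres)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    z = f * baseline / disparity   # depth from stereo geometry
    x = (u - cx) * z / f           # lateral offset
    y = (v - cy) * z / f           # vertical offset
    return (x, y, z)

def distance_mm(p1, p2):
    """Euclidean distance between two 3D points, in millimetres."""
    return 1000.0 * math.dist(p1, p2)

# Two user-selected landmarks, e.g. the endpoints of a resection margin:
a = triangulate(300, 240, disparity=40.0)
b = triangulate(340, 240, disparity=40.0)
print(f"{distance_mm(a, b):.1f} mm")  # → 50.0 mm
```

In the paper's setting, the disparity would come from a learned stereo network such as RAFT-Stereo and the landmarks from detection models such as YOLOv8; the closed-form geometry above stays the same.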

science x media Tandem Program: "From Slices to Spaces"
Prof. Dr. Moritz Queisner and Frédéric Eyl (Designer and Managing Director of TheGreenEyl) successfully applied to the Stiftung Charité for funding as a "science x media tandem".
The science x media tandems are the first programme in the new funding priority "Open Life Science". With this funding priority, the Charité Foundation is working to make the life sciences in Berlin more comprehensible and accessible to a broader public and to strengthen the trustworthiness of medical professionals.

Under the title "From Slices to Spaces", the tandem of Moritz Queisner and Frédéric Eyl is implementing a science parcours in which spatially complex research data from surgery and biomedicine will be made accessible to a broad audience in a multisensory way through new visualization techniques. Building on Moritz Queisner's research on new imaging techniques, the tandem employs extended reality (XR) techniques. Owing to their unique ability to link digital objects with the viewer's real environment, the 4D images they generate are particularly well suited to representing and conveying spatial information.

This is where the tandem's project comes in: 4D images are not only interesting for researchers seeking to understand complex research data but can also give laypeople an accessible, low-threshold insight into research data and processes. Frédéric Eyl's media expertise will be used to make the specific visual knowledge from research comprehensible and experiential for non-experts. The science parcours is intended to be integrated as a digital extension of the architecture of the new research building, "Der Simulierte Mensch", located on the premises of Charité. The parcours will include the facade, the inter-floor airspace, and the central glass surfaces within the building as its stations. By enabling users to explore 4D research data within the architecture and investigate it with their own smartphones in an AR application, concrete practices and deployment locations of new image-based technologies become experiential and comprehensible. The project not only enhances the perception of Charité and Berlin as a scientific location but also opens up places of knowledge creation to the public, making the practices and techniques of the life sciences more visible.


New DFG project "4D Imaging"
The DFG Schwerpunktprogramm „Das Digitale Bild“ (SPP 2172) funds the new project “4D Imaging: From Image Theory to Imaging Practice” (2023-2026). Principal investigators are Prof. Dr. Kathrin Friedrich (Universität Bonn) and Prof. Dr. Moritz Queisner.

The term 4D imaging refers to a new form of digital visuality in which image, action and space are inextricably interwoven. 4D technologies capture, process and transmit information about physical space and make it computable in real time: changes caused by movements and actions become immediately calculable, making 4D images particularly important in aesthetic and operational contexts, where they reconceptualize various forms of human-computer interaction. The 4D Imaging project responds to the growing need in medicine to understand, use, and design these complex imaging techniques. It transfers critical, reflexive knowledge from research into clinical practice to enable surgeons to use and apply 4D imaging techniques. Especially in surgical planning, 4D imaging techniques may improve the understanding and accessibility of spatially complex anatomical structures. To this end, the project is developing approaches to how 4D imaging can complement and transform established topographic ("2D") imaging practices.

Work with us | PhD position

We are hiring: 3-year #PhD position @Charité – Universitätsmedizin Berlin.
  • Join our interdisciplinary team for a PhD on new #imaging technologies at the intersection of digital health, surgery and biomedicine
  • Explore new ways to understand and/or visualize anatomical structures in #4D using extended reality #XR #digitaltransformation
  • Connect theory and practice in an interdisciplinary research group
  • Open call: open to all disciplines! Yes, that’s right – design, computer science, computer visualistics, digital health, psychology, media studies, workplace studies, game design…
  • What counts is a convincing idea for your doctoral project in the field of "4D imaging"

Sounds interesting? Apply now or reach out to Moritz Queisner (moritz.queisner@charite.de) if you have any questions.

More information:
German: https://karriere.charite.de/stellenangebote/detail/wissenschaftliche-mitarbeiterin-wissenschaftlicher-mitarbeiter-dwm-technologietransfer-chirurgie-dm27222a
English: https://karriere.charite.de/stellenangebote/detail/scientific-researcher-phd-position-dfm-dm27222b

BMBF funds KIARA
With the programme "AI-based assistance systems for process-accompanying health applications", the Federal Ministry of Education and Research (BMBF) is funding innovative research and development work on interactive assistance systems that support processes in clinical health care using artificial intelligence methods.

Together with the partners Gebrüder Martin GmbH & Co. KG, Tuttlingen, HFC Human-Factors-Consult GmbH, Berlin and the Fraunhofer Institute for Telecommunications Heinrich-Hertz-Institut (HHI), Berlin, we successfully applied with the project "AI-based recording of work processes in the operating theatre for the automated compilation of the operating theatre report" (KIARA).




Operating theatre reports document all relevant information during surgical interventions. They serve to ensure therapeutic safety and accountability and as proof of performance. The preparation of the OR report is time-consuming and ties up valuable working time – time that is then not available for the treatment of patients.

In the KIARA project, we are working on a system that automatically drafts operating theatre reports. The KIARA system is intended to relieve medical staff: it documents operating theatre activities and creates a draft of the report, which then only needs to be checked, completed and approved. The system works via cameras integrated into operating theatre lamps. Their image data is then analysed with the help of artificial intelligence to recognise and record objects, people and all operating theatre activities. The ambitious system is to be developed and tested in a user-centred manner for procedures in the abdominal cavity and in oral and maxillofacial surgery.

KIARA is intended to continuously learn through human feedback and to simplify clinical processes for the benefit of medical staff by automating the creation of operating theatre reports. The system can also be applied to other operating theatre areas in the future.

The project has a financial volume of € 2.16 million.
The kick-off meeting took place on 16.09.2022 at the Charité.
„Si-M-Day“ | November 24th, 2022
Join us – at our online networking event.
We, the Si-M spokespersons and coordinators, are pleased to invite you to our first symposium "Si-M-Day" on 24th November from 9 am to 2 pm – online.
It is dedicated to networking and initiation of projects between investigators of both partner institutions.
Registration is open until November 18th (abstract submission deadline: October 17th).
Active Matter in Robotic-Assisted Surgery
Tuesday, 12.09.2022 | Cluster Retreat | Matters of Activity

2:30 – 2:45 pm Welcome & Intro
2:45 – 4:15 pm Panel 1
Rasa Weber Product Design (20 Minutes)
Felix Rasehorn Product Design (20 Minutes)
Binru Yang Engineering (20 Minutes)
Panel Discussion (30 Minutes)

4:15 – 4:45 pm Coffee Break
4:45 – 6:15 pm Panel 2
Jakub Rondomanski Mathematics (20 Minutes)
Babette Werner Art and Visual History (20 Minutes)
Anna Schäffner & Dominic Eger Domingos Product Design (20 Minutes)
Panel Discussion (30 Minutes)

6:15–7:30 pm Opening Exhibition and Aperitivo
VolumetricOR | Surgical Innovation
Our paper "VolumetricOR: A new Approach to Simulate Surgical Interventions in Virtual Reality for Training and Education" is available in the latest issue of Surgical Innovation.

Surgical training is primarily carried out through observation during assistance or on-site classes, by watching videos as well as by different formats of simulation. The simulation of physical presence in the operating theatre in virtual reality might complement these necessary experiences. A prerequisite is a new education concept for virtual classes that communicates the unique workflows and decision-making paths of surgical health professions (i.e. surgeons, anesthesiologists, and surgical assistants) in an authentic and immersive way. For this project, media scientists, designers and surgeons worked together to develop the foundations for new ways of conveying knowledge using virtual reality in surgery.
A technical workflow to record and present volumetric videos of surgical interventions in a photorealistic virtual operating room was developed. Situated in the virtual reality demonstrator called VolumetricOR, users can experience and navigate through surgical workflows as if they were physically present. The concept is compared with traditional video-based formats of digital simulation in surgical training.

VolumetricOR lets trainees experience surgical action and workflows (a) three-dimensionally, (b) from any perspective and (c) in real scale. This improves the linking of theoretical expertise and practical application of knowledge and shifts the learning experience from observation to participation.
Discussion: Volumetric training environments allow trainees to acquire procedural knowledge before going to the operating room and could improve the efficiency and quality of the learning and training process for professional staff by communicating techniques and workflows when the possibilities of training on-site are limited.

Authors are Moritz Queisner, Michael Pogorzhelskiy, Christopher Remde, Johann Pratschke, and Igor M. Sauer.
BMBF grant – GreifbAR
The Federal Ministry of Education and Research (BMBF) funds the project "Tangible reality - skilful interaction of user hands and fingers with real tools in mixed reality worlds (GreifbAR)" – a cooperation of the Augmented Vision group of the DFKI (Prof. Dr. Didier Stricker), the Department of Psychology and Human-Machine Interaction of the University of Passau (Prof. Dr. Susanne Mayr), the company NMY Mixed Reality Communication (Christoph Lenk), and the Experimental Surgery of Charité – Universitätsmedizin Berlin (Prof. Dr. Igor M. Sauer).

The goal of the GreifbAR project is to make extended reality (XR) worlds, including virtual (VR) and mixed reality (MR), tangible and graspable by allowing users to interact with real and virtual objects with their bare hands. Hand accuracy and dexterity are paramount for performing precise tasks in many fields, but the capture of hand-object interaction in current XR systems is woefully inadequate. Current systems rely on hand-held controllers or capture devices that are limited to hand gestures without contact with real objects. GreifbAR solves this limitation by proposing a sensing system that detects both the full hand grip, including the hand surface, and the object pose when users interact with real objects or tools. This sensing system will be integrated into a mixed reality training simulator.

Competent handling of instruments and suture material is the basis of every surgical activity. The main instruments used in surgery are in the hands of the surgical staff. Their work is characterised by the targeted use of a large number of instruments that have to be operated and controlled in different ways. Until now, surgical knotting techniques have been learned through personal instruction by experienced surgeons, blackboard drawings and video-based tutorials. A training and teaching concept based on capturing finger movements does not yet exist in surgical education and training. Learning surgical knotting techniques through participant observation and direct instruction by experienced surgeons is cost-intensive and hardly scalable. This type of training is increasingly reaching its limits in daily clinical practice, which can be attributed in particular to the changed economic, social and regulatory conditions of surgical practice. Students and trainees as well as specialist staff in further training are therefore faced with the problem of applying and practising acquired theoretical knowledge in a practice-oriented manner. Text- and image-based media allow scalable theoretical knowledge acquisition independent of time and place. However, gestures and work steps can only be passively observed and subsequently imitated, and the learning success cannot be quantitatively measured and verified.

The aim of the Charité's sub-project is therefore threefold: to develop a surgical application scenario for mixed/augmented reality (MR/AR) that spatially guides and verifiably records the complex fine motor finger movements involved in tying surgical knots, to implement and technically test the developed concept in a demonstrator, and to evaluate the system's usability in a clinical context.
ADBoard | Therapeutic Assist and Decision Algorithms for Hepatobiliary Tumor Boards
The Gemeinsamer Bundesausschuss (Federal Joint Committee, G-BA) will fund a new collaborative project of the Charité's Dept. of Surgery and the Deutsches Forschungszentrum für Künstliche Intelligenz (German Research Center for Artificial Intelligence, DFKI), Speech and Language Technology.

The aim of the project Therapeutic Assist and Decision Algorithms for Hepatobiliary Tumor Boards (ADBoard) is the validation and evaluation of decision support systems based on linguistic and semantic methods of artificial intelligence (AI) for interdisciplinary tumour conferences in the care of tumour patients. Natural language processing (NLP) and machine learning (ML) will provide the technical basis for data extraction, data filtration and decision support for the automated generation of therapy recommendations. Interdisciplinary tumour board conferences are medical conferences, usually held on a weekly basis, which are required by the respective medical societies to determine a therapy or monitoring plan for patients with malignant diseases. Participants are representatives of the required medical disciplines who, taking into account the tumour characteristics and the general health of the patient, review the treatment options and make a therapy recommendation.

The Gemeinsamer Bundesausschuss (Federal Joint Committee, G-BA) has the mandate to promote new forms of health care that go beyond the current standard provision of statutory health insurance, as well as health care research projects aimed at gaining knowledge to improve existing health care.

ADBoard is a collaboration of Priv.-Doz. Dr. Felix Krenzien, Priv.-Doz. Dr. Christian Benzing, Prof. Dr. Dominik Modest, Prof. Dr. Johann Pratschke (Charité – Universitätsmedizin Berlin) and Prof. Dr.-Ing. Sebastian Möller, Head of Research Department Speech and Language Technology, German Research Center for Artificial Intelligence.
Dr. Moritz Queisner
Dr. rer. medic. Moritz Queisner received his doctorate certificate today (magna cum laude)! This is in recognition of his work in the field of extended reality technology in visceral surgery. His thesis is entitled "XR in surgery – spatial and embodied computing in digital surgery: technology, application, design".

CONGRATULATIONS !
CASSANDRA | Clinical ASSist AND aleRt Algorithms
The Innovationsausschuss beim Gemeinsamen Bundesausschuss (G-BA) is funding 33 new projects in healthcare research. A total of 186 project applications were received in response to the funding announcements of December 2019. Nine project proposals from the open topic area and 24 project proposals from the topic-specific area received a positive funding decision.

Our project CASSANDRA (Clinical ASSist AND aleRt Algorithms – Early detection of postoperative complications with machine learning algorithms) is one of the projects funded for three years.

The aim of the project is to evaluate machine learning (ML) for the detection of postoperative complications after major abdominal surgery. Using digital records and ML-driven analysis of perioperative risk factors, postoperative parameters and telemedical vital-sign monitoring, the project will examine whether complications requiring treatment – in particular infections of the abdominal cavity after liver, pancreas, stomach and intestinal surgery – can be automatically detected and predicted, in order to lay the groundwork for an autonomous real-time monitoring system on normal wards.
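The alerting idea behind such a system can be made concrete with a toy example. This is a hedged sketch, not the CASSANDRA model: the feature names, weights and threshold below are invented for illustration, and a real system would learn and validate them on perioperative patient data.

```python
# Hedged sketch (not the CASSANDRA system): how a learned risk model could
# trigger a ward alert from routine postoperative parameters. All weights
# and feature names are invented for illustration.
import math

WEIGHTS = {"crp_mg_l": 0.015, "heart_rate": 0.03, "temperature_c": 0.9}
INTERCEPT = -40.0  # invented

def complication_risk(obs):
    """Logistic risk score in [0, 1] from a dict of observations."""
    z = INTERCEPT + sum(WEIGHTS[k] * obs[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def alert(obs, threshold=0.5):
    """Raise an alert when the predicted risk crosses the threshold."""
    return complication_risk(obs) >= threshold

stable = {"crp_mg_l": 40, "heart_rate": 70, "temperature_c": 36.8}
deteriorating = {"crp_mg_l": 220, "heart_rate": 115, "temperature_c": 38.9}
print(alert(stable), alert(deteriorating))  # → False True
```

The project's actual models would additionally handle streaming vital-sign data and missing values; the threshold-on-a-risk-score pattern shown here is only the final alerting step.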
CASSANDRA is a collaboration of Axel Winter, Dr. Max Maurer, Prof. Dr. Igor M. Sauer (Charité – Universitätsmedizin Berlin) and Prof. Dr. Bert Arnrich, Head of the Chair, Professor for Digital Health - Connected Healthcare, Hasso Plattner Institut.
DICOM_XR | XR4ALL 2nd Open Call: Project Selected for Phase 1
XR4ALL is an initiative by the European Commission to strengthen the European XR industry.

Out of 140 applications, 18 projects were selected for Phase 1 at the 2nd cut-off date of the XR4ALL Open Call. In this phase, projects need to expand upon and validate their concept from a business and a technical perspective over two months.
Based on an evaluation at the end of the first phase, only the best-rated projects will be admitted to Phase 2 and therefore be able to develop the proposed solution.

Our project DICOM_XR (PI: Christoph Rüger) is one of them (and one of three from Germany)!

One of the most common use-cases for XR in medicine is the visualization of medical imaging data like computed tomography (CT) scans. The well-established standard for storing and transferring such data is DICOM (Digital Imaging and Communications in Medicine). It is used in all major hospitals in the European Union – XR applications that involve medical images need to be built upon this standard. Existing open-source DICOM frameworks offer data interoperability and are compatible with 3D engines, like Unity. However, while DICOM is well-established and very feature rich, it is also a complex standard to work with as a developer. In addition to data interoperability provided by DICOM, most medical XR applications also require: 1) Data transfer from a machine with access to the hospital’s image network to mobile XR devices such as HMDs, 2) performant visualization, particularly for stereographic displays, and 3) view manipulation with 3D input (e.g. hand tracking) instead of mouse input. These requirements are, at best, additional workloads for technically skilled teams and, at worst, insurmountable hurdles for projects lacking programmers.
DICOM_XR is a framework aiming to solve all three of these requirements: data transfer, performant visualization and utilization of three-dimensional input. Building upon an existing open-source DICOM solution, DICOM_XR will offer a ‘plug and play’ solution for XR developers. It will significantly decrease technical hurdles for e.g. medical studies evaluating XR, which are still sorely needed. It can also streamline the development of commercial XR applications: Medical open-source projects such as SlicerIGT have been successfully used as a foundation for certified medical products. In short, DICOM_XR will allow medical XR developers to focus on features that their users want, rather than technical infrastructure.
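One small but essential piece of the plumbing such a framework must get right is the DICOM patient-coordinate mapping: each pixel of a slice maps to millimetre patient coordinates via ImagePositionPatient, ImageOrientationPatient and PixelSpacing (DICOM PS3.3, Image Plane Module). The sketch below implements that standard mapping in plain Python; the function name and example values are ours, for illustration only.

```python
# Illustrative sketch: the standard DICOM mapping from a pixel index in one
# slice to patient coordinates (mm), per DICOM PS3.3 C.7.6.2.1.1.
# Function name and example values are ours, for illustration only.

def voxel_to_patient(row, col, ipp, iop, pixel_spacing):
    """Map pixel (row, col) of a slice to DICOM patient coordinates (mm).

    ipp: ImagePositionPatient (x, y, z) of the first transmitted pixel.
    iop: the six ImageOrientationPatient values; the first three are the
         row direction cosines, the last three the column direction cosines.
    pixel_spacing: PixelSpacing (row spacing, column spacing) in mm.
    """
    row_dir, col_dir = iop[:3], iop[3:]
    dr, dc = pixel_spacing
    return tuple(ipp[k] + col * dc * row_dir[k] + row * dr * col_dir[k]
                 for k in range(3))

# Example: an axial slice with 0.7 mm spacing and no rotation.
p = voxel_to_patient(10, 20,
                     ipp=(-100.0, -100.0, 50.0),
                     iop=(1, 0, 0, 0, 1, 0),
                     pixel_spacing=(0.7, 0.7))
print([round(v, 3) for v in p])  # → [-86.0, -93.0, 50.0]
```

An XR viewer applies this same per-slice affine to place the reconstructed volume in the headset's world frame, which is exactly the kind of boilerplate a framework like DICOM_XR would take off a developer's hands.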
Ultrasound in augmented reality: a mixed-methods evaluation of head-mounted displays in image-guided interventions
The International Journal of Computer Assisted Radiology and Surgery accepted Christoph Rüger's paper on "Ultrasound in augmented reality: a mixed-methods evaluation of head-mounted displays in image-guided interventions" for publication.

Augmented reality (AR) and head-mounted displays (HMD) are current subjects of investigation in medical practice. A commonly proposed use-case of AR-HMDs is to display data in image-guided interventions. Although technical feasibility has been thoroughly shown, effects of AR-HMDs on interventions are not yet well researched, hampering clinical applicability. Therefore, the goal of this study is to better understand the benefits and limitations of this technology in ultrasound-guided interventions.
We used an AR-HMD system (based on HoloLens, Microsoft Corp.) which overlays live ultrasound images spatially correctly at the location of the ultrasound transducer. We chose ultrasound-guided needle placements as a representative task for image-guided interventions. To examine the effects of the AR-HMD, we used mixed methods and conducted two studies in a lab setting: (1) in an experimental study, we asked participants to place needles into a training model and evaluated task duration and accuracy with the AR-HMD as compared to the standard procedure without visual overlay, and (2) in a qualitative study, we analysed the user experience with the AR-HMD using think-aloud protocols during ultrasound examinations and semi-structured interviews after the task.
Participants (n=20) placed needles more accurately (mean error of 7.4 mm vs. 4.9 mm, p=0.022) but not significantly faster (mean task duration of 74.4 s vs. 66.4 s, p=0.211) with the AR-HMD. All participants in the qualitative study (n=6) reported limitations of and unfamiliarity with the AR-HMD, yet all but one also clearly noted benefits and/or that they would like to test the technology in practice.
We present additional, though still preliminary, evidence that AR-HMDs provide benefits in image-guided procedures. Our data also contribute insights into potential causes underlying the benefits, such as improved spatial perception. Still, more comprehensive studies are needed to ascertain benefits for clinical applications and to clarify underlying mechanisms.

Authors are Christoph Rüger, Markus A. Feufel, Simon Moosburner, Christopher Özbek, Johann Pratschke, and Igor M. Sauer.
Brigitta Globke: Digital Clinician Scientist
Dr. Brigitta Globke successfully applied for participation in the BIH Charité Digital Clinician Scientist Program.

The aim of the project is the development and evaluation of an augmented reality assist system for intraoperative photoplethysmographic control of perfusion. The project is carried out in collaboration with Benjamin Kossack, Fraunhofer | Heinrich Hertz Institute Computer Vision and Graphics.

Charité and BIH are jointly organizing the new "Digital Clinician Scientist Program" (D-CSP). The program is primarily aimed at physicians who are already working on innovative research projects to meet the technological challenges of data-driven medicine during their specialist training. The German Research Foundation (DFG) is funding the project for an initial period of three years.

The BIH Charité Digital Clinician Scientist Program will provide a new career path for the creators of digital change in medicine and will expand the successful Germany-wide model of the BIH Charité Clinician Scientist Program. In addition to the three-year individual funding, which is based on protected time for research, the focus is on modules for the acquisition of scientific skills (big data, bioinformatics or artificial intelligence) as well as mandatory mentoring. For the new program, various experts from the Charité, the BIH, the Max Delbrück Center for Molecular Medicine (MDC), the Berlin Institute for Medical Systems Biology (BIMSB), the Einstein Center Digital Future, and the Bernstein Center for Computational Neuroscience will be involved in the design of the D-CSP and in the recruitment and supervision of program participants.
Extended reality technologies for support of surgical workflows
Current developments in the field of extended reality (XR) could prove useful for optimizing surgical workflows, time effectiveness and postoperative outcomes. Although still primarily a subject of research, XR technologies are rapidly improving and approaching feasibility for broad clinical application. The surgical fields of application of XR technologies are currently primarily training, preoperative planning and intraoperative assistance. For all three areas, products already exist (some clinically approved) and technical feasibility studies have been conducted. In teaching, the use of XR can already be considered practical and meaningful but still needs to be evaluated in large multicenter studies. In preoperative planning, XR can also offer advantages, although technical limitations often impede routine use; for intraoperative use, informative evaluation studies are mostly lacking, so that a meaningful assessment is not yet possible. Furthermore, assessments of cost-effectiveness are lacking in all three areas. Despite the lack of high-quality evaluations of its practical and clinical use, XR promises clear advantages for surgical workflows. New concepts for effective interaction with XR media also need to be developed. Further research progress and technical developments in this field can be expected in the future.

Authors are Christoph Rüger, Simon Moosburner and Igor M. Sauer (Chirurg 2020; 91(7): 544-552).
Junior Professorship for Digital Surgery and Interdisciplinary Technology Research
The Department of Surgery of the Charité (Director: Prof. Dr. Johann Pratschke) at Charité Center 8 (Charité Center for Surgery) invites applications for the Junior Professorship for Digital Surgery and Interdisciplinary Technology Research (salary group: W1 BBesG-ÜfBE, non-tenured), reference number: Prof. 546/2020.

The initial appointment is for three years, with an optional extension for another three years following successful evaluation. It is intended to turn the Junior Professorship into a W2 Professorship (salary group: W2 BBesG-ÜfBE) after six years. The successful candidate has to fulfill the appointment requirements in accordance with § 102a of the Berlin Higher Education Act (Berliner Hochschulgesetz, Gem. § 102a BerlHG) and needs to demonstrate credibly through his/her previous scientific work that he/she is able to fulfill the expectations of the junior professorship.

One of the tasks of this Junior Professorship is the appropriate representation of the research area mentioned above. Within the framework of the Cluster of Excellence Matters of Activity – Image Space Material, he/she is expected to evaluate, accompany and advance the digital transformation in surgery and related disciplines as well as to expand the repertoire of methods and initiate innovations. In cooperation with the Cluster of Excellence's research areas Cutting and Material Form Function, new surgical cutting techniques are to be investigated and developed. The professorship is planned to be linked to the institutions currently being established, The Simulated Human Being (Si-M) and the Berlin Simulation and Training Centre (BeST). In addition to the tasks mentioned, the following three fields of activity are to be covered:

Interdisciplinary Knowledge Transfer

  • Implementation of new applications from areas such as deep learning, extended reality (mixed and virtual reality), or robotics in surgical practice requires an intensification of interdisciplinary cooperation
  • Continuous exchange between industry and practice as well as with adjacent disciplines (e.g., radiology)
  • Integration of a growing number of applications and competencies from areas outside established medical technology, e.g., game design, computer science, or human factors studies

Technology Assessment

  • Sustainable implementation of digital technologies through opportunity and risk assessment
  • Advising the Department of Surgery on investment decisions based on appropriate risk and media competency

Innovation

  • Identification of concrete application sites and practices of digital surgery within the clinic and experimental research (e.g., use of technologies in biomedical research approaches to organ replacement as well as oncological models) for future Living Labs, and demonstration of these to the public
  • Integration of users, research projects, and start-ups, also outside the clinic

The successful candidate will be engaged in teaching activities of the medical education curriculum at the Charité, supervise Master's and doctoral candidates, and participate in academic self-governance. In addition, the candidate should present concepts for good supervision of doctoral students as well as for the integration of his/her research activities into the teaching of the Charité. Appointment requirements are governed by § 102a of the Berlin Higher Education Act (Berliner Hochschulgesetz: § 102a BerlHG). A completed university degree in the Natural Sciences, Humanities, and/or Life Sciences, or another related medical or non-medical field, is required, as is a doctorate (Ph.D. and/or M.D.) and significant postdoctoral experience. Basic medical knowledge is desired.

The Charité is an equal opportunity employer committed to excellence through diversity. As women are under-represented in academia, we explicitly encourage women to apply. Women will be given preference over equally qualified men within the framework of the legal possibilities. We value diversity and therefore welcome all applications – regardless of gender, nationality, social background, religion, or age. Equally qualified applicants with disabilities will be given preference.

Written applications in the format specified at https://career.charite.de/am/calls/application_notes.pdf should be submitted by June 19th, 2020 via https://career.charite.de. For further questions, please contact Prof. Dr. Igor Maximilian Sauer.
Vascular anatomy of the juvenile Göttingen minipig
Lab Animal has accepted our manuscript "Computed tomography-based survey of the vascular anatomy of the juvenile Göttingen minipig" for publication.

Over the past 50 years, image-guided procedures have been established for a wide range of applications. The development and clinical translation of new treatment regimens necessitate the availability of suitable animal models. The juvenile Göttingen minipig presents a favourable profile as a model for human infants. However, the literature contains no information on the vascular system of juvenile minipigs. Such information is imperative for planning the accessibility of target structures by catheterization.

We present here a complete mapping of the arterial system of the juvenile minipig based on contrast-enhanced computed tomography. Four female animals weighing 6.13 ± 0.72 kg were used for the analyses. Imaging was performed under anaesthesia, and the measurement of the vascular structures was performed independently by four investigators. Our dataset forms a basis for future interventional studies in juvenile minipigs, and enables planning and refinement of future experiments according to the 3R (replacement, reduction and refinement) principles of animal research.


Authors are J. Siefert, K.H. Hillebrandt, M. Kluge, D. Geisel, P. Podrabsky, T. Denecke, M. Nösser, J. Gassner, A. Reutzel-Selke, B. Strücker, M.H. Morgul, S. Guel-Klein, J.K. Unger, A. Reske, J. Pratschke, I.M. Sauer, and N. Raschzok.
Our manuscript "Depletion of donor dendritic cells ameliorates immunogenicity of both skin and hind limb transplants" has been accepted for publication in Frontiers in Immunology, section Alloimmunity and Transplantation. Authors are Muhammad Imtiaz Ashraf, Joerg Mengwasser, Anja Reutzel-Selke, Dietrich Polenz, Kirsten Führer, Steffen Lippert, Peter Tang, Edward Michaelis, Rusan Catar, Johann Pratschke, Christian Witzel, Igor M. Sauer, Stefan G. Tullius, and Barbara Kern.

Acute cellular rejection remains a significant obstacle affecting successful outcomes of organ transplantation including vascularized composite tissue allografts (VCA). Donor antigen presenting cells (APC), particularly dendritic cells (DC), orchestrate early alloimmune responses by activating recipient effector T cells. Employing a targeted approach, we investigated the impact of donor-derived conventional DC (cDC) and APC on the immunogenicity of skin and skin-containing VCA grafts, using mouse models of skin and hind limb transplantation.
By post-transplantation day 6, skin grafts demonstrated severe rejection, characterized by a predominance of recipient CD4 T cells. In contrast, hind limb grafts showed moderate rejection, primarily infiltrated by CD8 T cells. While donor depletion of cDC and APC reduced the frequencies, maturation, and activation of DC in all analysed tissues of skin transplant recipients, a reduction in DC activity was only observed in the spleen of hind limb recipients. Donor cDC and APC depletion did not impact all lymphocyte compartments but significantly affected CD8 T cells and activated CD4 T cells in the lymph nodes of skin recipients. Moreover, both donor APC and cDC depletion attenuated the Th17 immune response, evident in significantly reduced Th17 (CD4+IL-17+) cells in the spleen of skin recipients and reduced levels of IL-17E and lymphotoxin-α in the serum of both skin and hind limb recipients. In conclusion, our findings underscore the highly immunogenic nature of the skin component in VCA. Depletion of donor APC and cDC mitigates the immunogenicity of skin grafts while exerting minimal impact on VCA.
