Augmented Reality-Cube-Assisted Navigation (ARcaN) system. Description of the Orthopractis Navigation System (ONS) method
Introduction.
Method
Orthopractis® navigation system - http://www.orthopractis.com/ (Thessaloniki, Greece). This system uses a reference cube placed on the patient during preoperative CT imaging.
Registration.
The patient's CT-generated DICOM images are processed; the images are segmented and exported in USDZ format. According to user preference, relevant points of interest are also registered and exported in TXT file format.
All files are calibrated against the reference cube files and exported, ready to be downloaded to a head-mounted augmented reality device (Apple Vision Pro).
Visualisation.
Once the files are downloaded, the patient's 3D virtual images and marked structures (points of interest) are visualized three-dimensionally in immersive space in augmented reality (AR) on the surgeon's Apple Vision Pro device. The reference cube previously placed on the patient's body should be placed at the same spot on the body. Each surface of the reference cube carries a QR code marking; once the surgeon sees the cube and the QR code is recognized by the app, the patient's USDZ files and points of interest are depicted and placed over the patient's body in the exact anatomical position. The app matches the reference cube with the virtual anatomical model extracted from CT imaging, together with the points of interest, and overlays these in immersive space over the real patient body structures in augmented reality, in the view of the surgeon wearing the head-mounted Apple Vision Pro device.
Surgical intervention
Overview of the workflow. Augmented Reality-Cube-Assisted Navigation (ARcaN) system
Imaging
The workflow starts with patient imaging (CT and/or MRI) while the reference cube (see figure) is firmly attached to the patient's body near the anatomical region of interest. The position of the reference cube should also be marked, and the imaging data (CT scan and/or magnetic resonance imaging (MRI)) are collected.
Guidelines for healthcare professionals on positioning and attaching the reference cube over the patient's skin. The following options are advised:
A. Steri-drape method. Position the cube over the patient's body and place a steri-drape over the patient's skin. Steri-drapes can be carefully positioned and adhered to the patient's skin, ensuring that the cube is not moved during the surgical intervention. Their adhesive properties and design help maintain the integrity of the sterile field, while the field of vision is not obscured and the QR codes are easily recognized by the camera.
B. Self adhesive velcro
Skin preparation: Ensure that the skin is clean, dry, and free from any oils, lotions, or other substances that might interfere with adhesion. Clean the area thoroughly and allow it to dry before applying; any application of adhesive materials should be done with caution to avoid potential skin irritation or damage. Self-adhesive Velcro is not specifically designed for direct application to the skin and may cause skin irritation, especially if used for an extended period or on sensitive skin.
C. MRI-compatible electrodiagnostic patches.
MRI-compatible patches have a secure adhesive backing that allows them to be comfortably affixed to the patient's skin. Specially designed MRI-compatible electrodiagnostic patches or electrodes are available for the reference cube. These patches are made of non-magnetic materials, such as carbon or plastic, and are specifically manufactured to be safe for use during MRI scans, avoiding artifacts and interference in the MRI image.
Technique.
1. Clean the skin. Use a dry gauze pad to remove excess skin oils, skin cells, and residue from the electrode sites. Never rub the skin until it is raw or bleeding.
NOTE: Prepare the electrode site with alcohol only if the skin is extremely greasy. If alcohol is used as a drying agent, always allow the skin to dry before placing the electrode patch on the skin. Peel off the backing from each electrode, being careful not to touch the adhesive side. Place each electrode firmly onto the prepared skin in the designated locations, then gently tug on each side of the cube to ensure it is securely attached. This step helps prevent accidental detachment during imaging. According to the manufacturer, the patches maintain proper contact with the skin for 24-36 hours.
2. Snap Connector: The snap connector is the part of the back surface of the cube that attaches to the metal stud or snap on the electrode patch. Five snap connectors are available at the back surface of the cube.
Buckle attachment: To attach the snap connector to the metal stud or snap on the electrode, apply slight firm pressure and push the connector onto the metal stud or snap until it audibly clicks into place. The audible click provides confirmation that the snap connector is properly fastened to the electrode.
Snap Release: To detach or remove the lead wire from the electrode patch, you generally need to apply pressure to release the snap connector. This can be done by using your fingers or gently pressing on the side of the snap connector to disengage it from the electrode's snap. At times, some lead wires may have additional mechanisms or buttons for release.
Collected data are loaded to the Data Management Center (DMC), where the segmentation and preregistration procedures start.
Segmentation procedure
The 3D model is reconstructed through segmentation from DICOM images from a CT scanner or MRI. The model is spatially bound to the reference cube; this is achieved by calculating the positions of eight embedded transparent markers inside the reference cube, arranged in a known spatial configuration.
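The spatial binding described above can be sketched numerically: given the cube's known local marker layout and the marker centers segmented from the CT volume, a rigid transform (rotation plus translation) tying the model to the cube can be recovered. The following is a minimal, assumption-based illustration, not the actual Orthopractis pipeline; it assumes exact, noise-free correspondences and uses four markers, whereas a real implementation would fit all eight in a least-squares sense (e.g. the Kabsch algorithm).

```python
# Sketch: recover R, t with ct = R * local + t from exact marker correspondences.
# Pure Python; real pipelines use a robust least-squares fit over all markers.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inverse3(M):
    # Cofactor (adjugate) inverse of a 3x3 matrix.
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[ (e*i - f*h), -(b*i - c*h),  (b*f - c*e)],
           [-(d*i - f*g),  (a*i - c*g), -(a*f - c*d)],
           [ (d*h - e*g), -(a*h - b*g),  (a*e - b*d)]]
    return [[adj[r][col] / det for col in range(3)] for r in range(3)]

def rigid_from_markers(local_pts, ct_pts):
    """Solve ct = R * local + t from 4 non-coplanar marker correspondences."""
    # Edge vectors from marker 0 form the columns of A (local) and B (CT).
    A = [[local_pts[j + 1][i] - local_pts[0][i] for j in range(3)] for i in range(3)]
    B = [[ct_pts[j + 1][i] - ct_pts[0][i] for j in range(3)] for i in range(3)]
    R = mat_mul(B, inverse3(A))            # R = B * A^-1 (exact data assumed)
    Rp0 = mat_vec(R, local_pts[0])
    t = [ct_pts[0][i] - Rp0[i] for i in range(3)]
    return R, t

# Synthetic demo: markers at cube corners, rotated 90 deg about z and shifted.
local = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
ct    = [(5, 2, 1), (5, 3, 1), (4, 2, 1), (5, 2, 2)]
R, t = rigid_from_markers(local, ct)
```

With noise-free input as above, the recovered rotation is the 90-degree z-rotation and the translation is the applied shift; with real segmentation noise, the least-squares variant distributes the error over all markers.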
Pre Registration procedure
Preoperatively, the anatomical points needed for intraoperative navigation are preregistered and positioned over the 3D patient model. All data are processed and calibrated accordingly, and the points of interest are selected after personal contact with the surgeon. The operative path can be planned and determined according to the surgeon's specific needs, offering personalization and versatility according to patient-specific requirements.
By default, the femoral head is the point of interest and is depicted in the virtual model as a red sphere. All points of interest are added to the patient's 3D virtual model and sent back from the DMC as a file to the surgeon, to be uploaded into the app. This is a major advantage compared to traditional techniques in computer-assisted surgery, mainly because cumbersome intraoperative registration of anatomical landmarks is no longer needed. Once the 3D patient model is loaded with the preregistered anatomical landmark points, the surgical workflow continues without interruption for registration of points.
Operative setup
The reference cube should again be placed at the position on the patient's body that was marked during imaging, near the anatomical region of interest, preferably in the exact previous position, and should be firmly attached over the drapes. For automatic registration, the reference cube should be clearly visible from all directions.
The previous markings, electrodiagnostic patches, or the mating adhesive Velcro should remain in place after imaging and be used to locate, as accurately as possible, the position of the reference cube as it was during imaging. Otherwise, if the patient has removed all markings that could reveal the previously attached skin position, a two-shot C-arm correction is feasible.
By taking two arbitrary C-arm X-ray poses of the region of interest and the reference cube before the operation, and taking pictures of the fluoroscopy screen, the new position of the reference cube is estimated and the procedure is automatically corrected.
AI
We integrate machine learning models into our app. Instruments are found and tracked as real-world objects in visionOS using reference objects trained with Create ML; the app recognizes each preregistered instrument and continuously provides spatial information.
Hand tracking
The hand can be tracked as an object by the Apple Vision Pro. The Apple Vision Pro is equipped with advanced sensors capable of high-precision tracking and utilizes a 27-point model over the entire hand, joints, and fingers to accurately detect and continuously track hand and finger movements in real time. An app in visionOS leverages Apple's machine learning algorithms to create hand anchors that represent specific points on the hand; their positions are updated in real time to reflect the position and orientation of the hands. These hand anchors are tracked in real time to enable interactions in immersive space in augmented reality (AR) and virtual reality (VR).
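As a minimal illustration of what per-joint hand tracking enables, a simple gesture such as a pinch can be derived purely from joint geometry. The sketch below is platform-neutral Python with hypothetical joint names and an illustrative threshold; it is not the visionOS API.

```python
# Sketch: detect a pinch when the thumb tip and index finger tip are within a
# small threshold distance. Joint names and threshold are illustrative only.
import math

PINCH_THRESHOLD_M = 0.015  # 1.5 cm; an illustrative assumption

def distance(a, b):
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

def is_pinching(joints):
    """joints maps hypothetical joint names to (x, y, z) positions in meters."""
    return distance(joints["thumbTip"], joints["indexFingerTip"]) < PINCH_THRESHOLD_M

# Example: fingertips 5 mm apart read as a pinch; 6 cm apart do not.
pinched = is_pinching({"thumbTip": (0.0, 0.0, 0.0),
                       "indexFingerTip": (0.005, 0.0, 0.0)})
open_hand = is_pinching({"thumbTip": (0.0, 0.0, 0.0),
                         "indexFingerTip": (0.06, 0.0, 0.0)})
```

In practice the joint positions would come from the platform's hand anchors, updated every frame, with temporal smoothing to suppress jitter.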
Auto-calibration of tools is achieved by sophisticated algorithms that allow accurate attachment of tools previously recognized as objects by the neural engines. This makes every task precise: accurate placement of screws, navigating a needle for biopsy, etc. When calibration is needed, auto-calibration is activated by pressing the start button.
(See the navigation video and images.)
Skin recognition
Instruments
Apple Vision Pro's object tracking is highly accurate due to its combination of advanced sensors (high-resolution cameras, LiDAR sensors, and inertial measurement units with gyroscopes and accelerometers), machine learning, and robust software tools.
The position and orientation of instruments in three-dimensional space are precisely traced in the immersive environment of the Apple Vision Pro by leveraging spatial object recognition using machine learning capabilities.
Instrument models have been trained using Apple's Create ML tool, which allows the generation of spatial object tracking models. These models can track objects from various angles and under different conditions while maintaining tracking accuracy.
Technique
The reference object model aligned perfectly with the detected physical object, and even with clutter around the object, the tracking mechanism was able to find the physical objects without problems.
Method
Once the hand is in the surgeon's optical field in the Apple Vision Pro, cylinders appear corresponding to the axis of the tool. By holding the respective tool, the tool is recognized and a pair of green spheres appears. By pressing the start button of the auto-calibration tool, the tool is continuously calibrated to the hand anchor, and the axis remains perpendicular and inside the cylindrical hole of the tool; as can be seen in the video, it brilliantly remains inside the holder tool's tube hole at all times. Buttons allow the surgeon to decrease or increase the length of the cylindrical tool according to their needs. The combination of the hand recognition and tool recognition features allows the surgeon to work seamlessly, with the tool appearing as if glued to the surgeon's hand.
Needle holder tool
The impactor tool is versatile and can be attached with magnets.
The drill, screw, and sawbone tools use the same recognition marker, as shown in the picture. By placing the tip of the tool at the hand, the point is registered, and the tool is calibrated once the auto-calibration icon is pressed.
These include key joints and fingertips, enabling detailed gesture recognition and interaction.
Key features related to hand tracking in the Apple Vision Pro include its sensors and cameras.
Surgeons use the Vision Pro to overlay real-time imaging data (such as CT or MRI scans) onto the patient's body during surgery. This provides a 3D view of the patient's anatomy directly in the surgeon's field of vision, improving precision. Dynamic overlay: as real-time detection occurs, the CT segmentation data is dynamically overlaid onto the live feed or processed images, maintaining alignment between the detected skin areas and the CT-based segments.
By combining ML recognition with AR in the Apple Vision Pro, surgical instruments are tracked and recognized, and targeting axes are overlaid in an enhanced view for surgeons, allowing accurate surgical procedures. The augmented immersive view overlays the instrument axis and can combine names or usage instructions directly onto the instruments in the surgeon's field of vision.
The system leverages real-time imaging and sophisticated software to guide interventional procedures.
We utilize advanced techniques in which a pre-trained model is fine-tuned on the surgical instruments dataset.
Instruments are integrated with the surgeon's hand, providing enhanced precision and control by recognizing and manipulating instruments that are automatically calibrated to the current hand position.
Recognizing both the hand and the instrument allows the system to understand not only what tool is being used but also how it is being held and manipulated. This context is crucial for tasks requiring high precision, such as ensuring that an instrument is directed at the right angle to the desired target or to a safe depth of insertion, allowing the surgeon to be navigated correctly with real-time measurements in front of their eyes, visible in the immersive environment. Visualizing the real patient anatomy overlaid on the surgical field during surgery helps identify anatomical landmarks that may have to be avoided during the procedure. The Apple Vision Pro offers real-time guidance of instruments by depicting all necessary measurements in the surgeon's field of view. The system can provide real-time feedback or guidance based on the position and movement of the surgeon's hands relative to the instruments and anatomical landmarks, avoiding the risk of inadvertently hitting critical anatomical structures during the procedure.
By combining hand and instrument recognition with AR, our system can overlay relevant information directly onto the surgeon's field of view. For instance, it can display the projected path of the needle, the target area, or alerts if the hand is off center (and by how far, in real distance) or if the needle deviates from the planned path, by measuring the angle between the tool and the planned trajectory.
The system allows visualization of the position and orientation of both the hand and the instrument relative to the patient's anatomy, providing a 3D view that enhances spatial understanding and precision.
Hand and instrument recognition components are integrated seamlessly in our app by leveraging the neural engine's recognition of both entities and joining them into a unique combination.
Improved Workflow Efficiency:
Contextual assistance: Depending on the recognized instrument and hand position, the system provides specific contextual information depicted over the surgeon's hand (see the relevant video), such as the expected depth, trajectory, or angle needed for the procedure, streamlining the process and reducing the cognitive load on the surgeon.
The system automatically recognizes when the surgeon switches instruments and adjusts its guidance accordingly. This is particularly useful in procedures requiring multiple instruments, such as biopsy needles and guidewires, improving the workflow's efficiency.
Safety during procedure
Our system automatically detects potential errors, such as improper hand positioning or incorrect instrument selection, and alerts the surgeon immediately. This proactive approach enhances patient safety by reducing the likelihood of procedural errors.
If the system detects that the hand-instrument coordination deviates significantly from the expected norm, it could trigger a fail-safe mechanism, pausing the procedure or alerting the medical team to reassess the situation.
Hand and instrument recognition is used, for example, to track the needle's position and depth more accurately without continuous imaging. This reduces the need for repeated CT scans during the procedure, thereby minimizing radiation exposure (see video).
Gesture recognition could allow the operator to control the system without touching the device or console. For example, a simple hand gesture can be used to adjust settings, or capture an image, further reducing the need for additional scans. For instance, if the surgeon's hand is slightly off the ideal trajectory, the system could provide real-time suggestions to adjust the angle or approach, ensuring that the needle is inserted at the optimal path.
Our sophisticated applications ensure better outcomes in medical procedures due to enhancement in surgical precision. AR overlays could guide surgeons through procedures, offering real-time data and visualization, which enhances precision and reduces the risk of errors.
Visualization for Image-Guided Surgery
Data Collection and Analysis for Skill Assessment
Performance Metrics: By analyzing the coordination between hand movements and instrument usage, the system can generate detailed metrics on a surgeon's performance, such as the speed, accuracy, and efficiency of their actions. This can be used for skill assessment and certification.
Learning from Expert Techniques: The system can be trained on the hand and instrument coordination of expert surgeons, learning patterns and techniques that can then be taught to trainees through guided exercises.
Technique to project the CT and MRI over the patient's body
Visualization: You can visualize the overlaid CT segmentation on the real-world image, showing the relationship between the detected skin area and the corresponding CT scan data.
By leveraging the object detection capabilities of models like YOLO, SSD, or Faster R-CNN, combined with CT scan segmentation data, you can effectively recognize human skin in real-world images and overlay corresponding CT segmentation data. This approach is particularly powerful in medical applications where understanding the spatial relationship between real-world images and CT scan data is critical. The combination of Core ML for real-time processing and advanced neural networks for object and segmentation detection makes this a feasible and highly impactful solution.
Each surface of the reference cube has a different QR marker. Once the QR code markers are recognized by the app, a colorful sphere appears at the center of the QR marker. The position of the QR markers is known in relation to the position of the embedded reference markers in the cube. Leveraging the reference cube, the app calculates where the 3D models extracted from CT or MRI imaging are to be loaded, projected over the patient's body, and visualized in the immersive augmented reality environment. Accurate alignment of the 3D model over the real object is achieved during recognition of the cube's QR surface markers by superimposing the 3D model in its accurate spatial position. By combining the radiological images and providing a digital reconstruction, this helps the surgeon identify the position of the anatomical structures.
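Once the cube's pose has been estimated from the QR surface markers, placing the model reduces to applying one rigid transform: every model vertex and point of interest stored in cube coordinates is mapped into world (surgeon-view) space. The sketch below is a simplified, assumption-based illustration of that mapping with a 4x4 homogeneous matrix; it is not the app's internal representation.

```python
# Sketch: map points of interest from cube coordinates to world coordinates
# using a 4x4 homogeneous pose matrix T (rotation block plus translation).

def apply_pose(T, p):
    """Map a 3D point p from cube coordinates to world coordinates."""
    x, y, z = p
    return tuple(T[r][0] * x + T[r][1] * y + T[r][2] * z + T[r][3]
                 for r in range(3))

# Illustrative pose: cube 0.4 m in front of the viewer, no rotation.
T = [[1, 0, 0,  0.0],
     [0, 1, 0,  0.0],
     [0, 0, 1, -0.4],
     [0, 0, 0,  1.0]]

femoral_head_cube = (0.02, 0.05, 0.10)   # point of interest in cube coords
femoral_head_world = apply_pose(T, femoral_head_cube)
```

Because the CT model and the points of interest were calibrated against the same cube, a single pose estimate suffices to overlay all of them consistently.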
Arteries, veins, and nerves can be loaded according to surgeon preference and spatially presented in real time in the surgeon's optical field. Depicting sensitive anatomical structures that can be visualized during surgery helps avoid accidental damage from surgical manipulations, giving the surgeon the advantage of navigating or intervening between these sensitive anatomical structures. Another invaluable feature is the transparency between virtual structures offered in Apple's AR. Different levels of transparency of virtual structures create a sense of depth and realism, as does the directional extension of the virtual instrument. For example, the tip of the virtual instrument can be extended, and by manipulation, its direction can be adjusted easily and visualized in augmented reality in case the tip comes into contact with a virtual sensitive anatomical structure. This also offers real-time visualization and allows real-time correction before intervention with the real instrument. This feature offers an advance in navigation by attuning the direction of insertion in real time; it helps plan the optimal direction before the real attempt to find the target point of interest, and predicts the real depth of insertion by simply visualizing and simulating, in augmented reality, the contact with the target point of interest. It allows the surgeon to search for the optimal approach before the actual intervention. Surgeons can operate under the guidance of the 3D models and the planned operative path.
X-ray C-arm fluoroscopy correction technique by Cube reference position.
Intraoperative X-ray C-arm fluoroscopy is an important tool in orthopedic surgery. Medical experts utilize imaging data acquired by mobile C-arm devices in everyday surgery to position implants, fixate bone fractures, and correct the physiological and mechanical alignment of the skeletal apparatus. We offer the ability to accurately reconstruct three-dimensional information, despite the difficulty of obtaining the pose of X-ray images in 3D space.
If the surgeon feels that the registration is not spatially accurate, or that the reference cube has been moved and displaced from its original imaging position, our method offers correction during surgery.
If the cube has accidentally changed position, it can be aligned again based on fluoroscopy imaging. Through intraoperative fluoroscopy of the reference cube, correction techniques developed by orthopractis.com with advanced algorithms allow the virtual 3D patient model to be repositioned in the AR environment, giving the surgeon an extra advantage in correcting the position of the 3D model.
Correction technique
The reference cube contains, embedded in predefined positions, eight metal spheres of different dimensions that act as fiducial markers; these are also spatially bound to the location of the patient's anatomical structures of interest during segmentation. The metal spheres are constructed from materials that are easily distinguishable in both CT and X-ray images to facilitate accurate registration.
Workflow of the correction technique.
The patient's CT scan imaging is performed with the reference cube placed adjacent to the patient's region of interest. The cube is visible in the CT images. Software is used to create a 3D reconstruction (segmentation) of the patient's region of interest, while the metal spheres in the cube are also segmented and their positions recorded, providing a detailed 3D patient model with the cube. Registration establishes the spatial relationship between the cube and the region of interest (e.g., the pelvis) in the CT data, and a transformation file (output in TXT format) is calculated that correlates the position of the cube with the segmented patient region of interest.
X-Ray-C-arm fluoroscopy Imaging:
During intraoperative X-ray C-arm fluoroscopy acquisition of the patient's region of interest and the reference cube, the eight metal spheres, as in the CT data, are clearly identifiable in the X-ray image depicted on the C-arm's screen. These 2D images are extracted and sent back to the Processing Station (PS) for post-image analysis.
Post-Imaging Analysis:
Post-imaging analysis entails semiautomatic localization of the eight spheres in the X-ray images as fiducials. By digitally marking the center of each sphere, the positions of the metal spheres are identified and their 2D coordinates obtained. Employing a 2D-3D registration and matching technique to correlate the 2D X-ray image with the 3D CT scan data, a 2D-3D transformation is calculated. The X-ray imaging geometry is related to the previous CT coordinate system of the segmentation imaging by sophisticated software capable of handling image processing and registration tasks, and of correcting variances in patient positioning and imaging geometries. This process leverages the known positions of the metal spheres in the cube as reference points, enabling accurate registration between 2D and 3D imaging modalities.
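The core of such 2D-3D registration is scoring candidate poses by reprojection error: the 3D sphere centers are projected through a camera model of the C-arm and compared to the sphere centers detected in the X-ray image, and the pose minimizing that error is kept. The sketch below illustrates only the scoring step with an idealized pinhole model and illustrative camera values; it is not the actual PS software, which must also model the C-arm's real projection geometry and optimize over pose parameters.

```python
# Sketch: score a candidate pose by the summed squared 2D reprojection error
# of the cube's sphere centers under a simple pinhole camera model.

def project(f, cx, cy, p):
    """Pinhole projection of a 3D point p = (x, y, z), z > 0."""
    x, y, z = p
    return (f * x / z + cx, f * y / z + cy)

def reprojection_error(spheres3d, detected2d, f, cx, cy):
    err = 0.0
    for p, q in zip(spheres3d, detected2d):
        u, v = project(f, cx, cy, p)
        err += (u - q[0]) ** 2 + (v - q[1]) ** 2
    return err

# Sphere centers, already expressed in camera coordinates for this sketch.
spheres = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (0.0, 0.1, 1.2)]
f, cx, cy = 1000.0, 512.0, 512.0
detected = [project(f, cx, cy, p) for p in spheres]      # perfect detections
good = reprojection_error(spheres, detected, f, cx, cy)  # ~0 for the true pose
shifted = [(x + 0.01, y, z) for (x, y, z) in spheres]    # a wrong pose
bad = reprojection_error(shifted, detected, f, cx, cy)   # clearly larger
```

A pose estimator would iterate over candidate rigid transforms of the cube and accept the one driving this error toward zero.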
Final pose correction:
In practice, the above process ends with an output file in TXT format, which has to be downloaded from the Processing Station (PS) and loaded into the application in order to correct the position of the reference cube relative to the virtual 3D patient images.
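The exact layout of that TXT output is not specified here, so as a hedged illustration the sketch below assumes it carries 16 whitespace-separated numbers forming a row-major 4x4 homogeneous correction transform; the parsing would have to be adapted to the actual PS format.

```python
# Sketch: load a correction transform from a TXT payload, assuming a row-major
# 4x4 matrix as 16 whitespace-separated numbers (an assumed format).

def load_transform(text):
    vals = [float(v) for v in text.split()]
    if len(vals) != 16:
        raise ValueError("expected 16 values for a 4x4 row-major matrix")
    return [vals[i * 4:(i + 1) * 4] for i in range(4)]

sample = """
1 0 0 12.5
0 1 0 -3.0
0 0 1 40.0
0 0 0 1
"""
T = load_transform(sample)   # T[r][3] holds the translation components
```

Validating the value count (and, in a real app, checking that the rotation block is orthonormal) guards against loading a truncated or corrupted correction file.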
This technique is particularly useful for clinical applications such as aligning preoperative plans with intraoperative imagery, or for confirming the accuracy of virtual 3D patient image placement intraoperatively, where new CT imaging during surgery is not feasible.
Using these steps, you can infer the new position of the patient's 3D virtual region of interest (the segmentation from the CT) in the coordinate system of the X-ray image and proceed to the next step.
The spatial configuration of these components enables the use of a single image to estimate the marker pose in 3D. In the original work, the components of the marker were segmented in the X-ray image, and the distance between these points and their projections was evaluated.
The system is used with wearable head-mounted displays (HMDs) that give the surgeon an augmented reality (AR) visualization while operating on the patient. AR is a technology that superimposes a computer-generated virtual scenario atop an existing reality, allowing synchronized observation of the digital information and the real surgical field. A new wearable HMD based on AR supports video and optical see-through modes.
Augmented Reality Surgical System
Surgical registration is achieved by matching the corresponding structures in two spaces. There are two available methods of noninvasive surgical registration: marker-based registration and marker-less registration.
Marker-based registration methods identify the marker points on the model reconstructed from the preoperative medical images of the patient and then register them with the corresponding marker points on the intraoperative anatomical structure of the patient. The types of marker points are mainly divided into paste-type markers [9], implantable markers [10], and anatomical landmarks [11]. Paste-type markers need to be pasted on the patient during surgery and the medical imaging scan re-performed, which is cumbersome, complicated, and time-consuming. If any marker point is displaced during surgery, the surgical registration accuracy is obviously impacted. Implantable markers ensure high registration accuracy but cause additional surgical trauma for marker implantation [12]. Surgical registration based on anatomical landmarks is usually used in neurosurgery of the head because distinctive features on the human body are required, such as the tip of the nose and the corners of the eyes.
The marker-less surgical registration method, based on surface matching, has been proposed to make up for the deficiencies of the marker-based registration method. The globally averaged characteristic of the marker-less method can overcome the error caused by inaccurate positioning of markers. In marker-less surgical registration, the rigid transformation relationship between the image space and the surgical space is obtained by matching three-dimensional (3D) point clouds in the two spaces. The point cloud in image space is obtained by 3D reconstruction of the preoperative medical images, and the point cloud in the surgical space is acquired by a laser scanner, structured-light 3D camera, or TOF vision system [15].
Apple Vision Pro head-mounted device
Experimental Set-Up
Hip phantom for the tests
A 3D-printed replica of a pediatric human hip ("hip phantom") was designed and produced. Starting from a real computed tomography dataset of a child with developmental dysplasia of the left hip, the pelvic and femoral bones were extracted with a semiautomatic segmentation pipeline and a complete 3D virtual model of the pelvis was obtained (Figure 2). From the virtual model, a tangible phantom made of acrylonitrile butadiene styrene (ABS) was produced via 3D printing (Dimension Elite 3D printer, Stratasys, Eden Prairie, MN, United States). The femoral nerve and vessels and the ischial structures were added to the pelvis virtual model. To obtain the physical replicas of the pelvis, ad hoc molds were designed and 3D printed, and silicone casting was then performed using these molds (Figure 2).
Once the acetabular device (A) is positioned over the reamer handle according to the manufacturer's instructions (it is absolutely necessary to follow the manufacturer's instructions and calibration in order to measure accurately), the dedicated QR marker is recognized and a white sphere appears over the center of the QR marker surface, with two cylinders in a perpendicular arrangement conforming to the insertion rod. The tip sphere shown in augmented reality (AR) by the app coincides with the mechanical tip of the rod ending in the reamer or cup.
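The guidance condition described above (the tool axis staying inside the virtual cylinder and the AR tip coinciding with the mechanical tip) comes down to a simple geometric test: the distance of the tracked tip from the cylinder's axis line. The sketch below illustrates that test with illustrative dimensions; it is not the app's actual algorithm.

```python
# Sketch: check whether the tracked tool tip lies inside the virtual guidance
# cylinder by measuring its distance from the cylinder's axis line.
import math

def point_line_distance(p, a, d):
    """Distance of point p from the line through a with unit direction d."""
    ap = [p[i] - a[i] for i in range(3)]
    along = sum(ap[i] * d[i] for i in range(3))
    closest = [a[i] + along * d[i] for i in range(3)]
    return math.sqrt(sum((p[i] - closest[i]) ** 2 for i in range(3)))

def tip_inside_cylinder(tip, axis_origin, axis_dir, radius):
    return point_line_distance(tip, axis_origin, axis_dir) <= radius

# Axis along z, 3 mm radius: a tip 1 mm off-axis passes, 10 mm off-axis fails.
inside = tip_inside_cylinder((0.001, 0.0, 0.05), (0, 0, 0), (0, 0, 1), 0.003)
outside = tip_inside_cylinder((0.010, 0.0, 0.05), (0, 0, 0), (0, 0, 1), 0.003)
```

Evaluated every frame against the tracked tool pose, such a test is what lets the app warn the surgeon the moment the instrument drifts out of the planned corridor.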
For the Orthopractis® navigation system (ONS), the following are required:
a. Apple Vision Pro.
b. Insertion tool carrying the tracking device (a QR code mounted on it), which is attached over the needle holder.
c. Reference cube.
d. Processing Station (PS) support facility, where the patient's CT imaging is processed and from which the virtual 3D patient imaging is downloaded to the Apple Vision Pro.
e. Hip Navigator app installed on the Apple Vision Pro.
Workflow for using the Orthopractis ® navigation system (ONS) is as follows:
1.Preoperative CT Imaging:
CT scans of the patient are conducted before the procedure, with the reference cube placed near the patient's region of interest. The reference cube acts as a reference for the navigation system later. The detailed anatomical information and images obtained with the reference cube (DICOM images) should be transferred to the company's Processing Station (PS).
2. Segmentation and Marking at Processing Station (PS) :
At the PS, the patient images are processed to identify and segment relevant anatomical structures, such as vessels and nerves. Points of interest based on the surgeon's requirements are marked and exported. Virtual 3D patient images (USDZ files) with the segmented anatomical structures and points of interest are prepared at the PS.
3. Augmented Reality Device Setup and Tool Calibration:
The virtual 3D patient images (USDZ files) and points of interest should be downloaded from the PS. The surgeon starts the Hip Navigator app on the Apple Vision Pro device. The insertion tool, with its mounted dedicated QR code, is attached to the needle holder and calibrated. The QR code is recognized by the app and used to detect and track the trajectory of the needle in real time. The app uses the downloaded files, along with the real-time tracking of the needle trajectory, to provide the surgeon with an augmented reality visualization.
4. Procedure Execution, Real-Time Visualization, Augmented Reality Guidance, and Localization of the Puncture Site:
The surgeon wears the head-mounted augmented reality device, the Apple Vision Pro. As the surgeon moves the needle holder over the patient's body, the Apple Vision Pro overlays the real-time virtual anatomical model (3D patient images, USDZ files) extracted from the CT imaging. The points of interest marked during preoperative segmentation are also visualized in augmented reality. With the augmented reality visualization, the surgeon can precisely locate the most appropriate puncture site in real time, just before attempting to reach the target. Based on the augmented reality guidance and visualization, the surgeon performs the needle insertion and advancement toward the target, while continuously assessing the trajectory and endpoint in the augmented reality environment.
Disclaimer.
The app system is not validated; it offers no diagnosis or treatment and provides only an early indication that further evaluation by a specialty doctor may be warranted. It is explicitly stated that the apps are not for diagnosis. Clinical judgment and experience are required to properly use the software. The app alone does not replace an M.D. or specialist. All information received from the app's output must be reviewed before any attempted treatment. The software is not for primary image interpretation. Any influence on operators in making decisions remains the users' own responsibility and depends on their experience. The app does not dispense medical advice. Patients should seek a doctor's advice in addition to using the app and/or before making any medical decisions for themselves. Never substitute or replace a doctor's advice or change treatment modalities based on any measured outcome. The app is indicated for assisting healthcare professionals for scientific and research reasons. Clinical judgment and experience are required to properly use the app, and further research and validation are pending. The app is not a substitute for professional medical advice. Any medical information provided by the app should be used with caution and not relied upon exclusively for medical decision-making.