Summary

Objective

To estimate the current and future (to year 2032) impact of osteoarthritis (OA) on health care seeking.

Method

Population-based study with prospectively ascertained data from the Skåne Healthcare Register (SHR), Sweden, encompassing more than 15 million person-years of primary and specialist outpatient care and hospitalizations. We studied all Skåne region residents aged ≥45 by the end of 2012 (n = 531,254) and determined the prevalence of doctor-diagnosed OA, defined as the proportion of the prevalent population that had received a diagnosis of OA of the knee, hip, hand, or other locations except the spine between 1999 and 2012. We projected the consultation prevalence of OA until year 2032 using Statistics Sweden's (SCB) projected age and sex structure and prevalence of overweight and obesity.

Results

In 2012 the proportion of the population aged ≥45 with any doctor-diagnosed OA was 26.6% (95% confidence interval (CI): 26.5–26.8) (men 22.4%, women 30.5%). The most common locations were the knee (13.8%), hip (5.8%), and hand (3.1%). Of the prevalent cases, 26.8% had OA in multiple joints. By the year 2032, the proportion of the population aged ≥45 with doctor-diagnosed OA is estimated to increase from 26.6% to 29.5% for any location, from 13.8% to 15.7% for the knee, and from 5.8% to 6.9% for the hip.

Conclusion

In 2032, at least an additional 26,000 individuals per 1 million population aged ≥45 years are estimated to have consulted a physician for OA in a peripheral joint compared to 2012. These findings underscore the need to address modifiable risk factors and develop new effective OA treatments.

Keywords

Osteoarthritis

Knee osteoarthritis

Epidemiology

Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd.

Open Access

Peer-reviewed

Research Article

3D reconstruction of human movement in a single projection by dynamic marker scaling

  • Erez James Cohen, 
  • Riccardo Bravi, 
  • Diego Minciacchi

  • Published: October 18, 2017
  • //doi.org/10.1371/journal.pone.0186443

Abstract

The three dimensional (3D) reconstruction of movement from videos is widely utilized as a method for spatial analysis of movement. Several approaches exist for 3D reconstruction of movement from 2D video projections; most require at least two cameras as well as the application of relatively complex algorithms. A few approaches also exist for 3D reconstruction of movement with a single camera, but they are not widely implemented due to tedious and complicated calibration methods. Here we propose a simple method that allows for 3D reconstruction of movement using a single projection and three calibration markers. This approach is made possible by tracking the change in diameter of a moving spherical marker within the 2D projection. In order to test our model, we compared kinematic results obtained with it to those obtained with the commonly used approach of two cameras and Direct Linear Transformation (DLT). Our results show that the proposed approach is in line with the DLT method for 3D reconstruction and kinematic analysis. The simplicity of this method may render it approachable for both clinical use and uncontrolled environments.

Citation: Cohen EJ, Bravi R, Minciacchi D (2017) 3D reconstruction of human movement in a single projection by dynamic marker scaling. PLoS ONE 12(10): e0186443. //doi.org/10.1371/journal.pone.0186443

Editor: Sliman J. Bensmaia, University of Chicago, UNITED STATES

Received: May 18, 2017; Accepted: October 2, 2017; Published: October 18, 2017

Copyright: © 2017 Cohen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and Supporting Information files.

Funding: The authors received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

Introduction

The clinical assessment of biomechanical parameters is fundamental for both rehabilitation and prevention. As such, objectivity in evaluation appears to be of great importance. However, more often than not the assessment of such parameters is still more subjective than warranted, limited to simple observation of the performed movements by the examiner, who scores the performance according to various clinical scales (e.g., [1–4]). To overcome this difficulty, various instruments exist that allow for more objective measurements. These instruments vary in complexity and cost, from simple manual goniometers to refined automatic kinematic assessment tools (e.g., [5–6]). However, when evaluating complex multi-segmental movements, the more expensive and refined tools are frequently called for.

One of the cornerstones of biomechanical evaluation is the dynamic study of the body in its entirety during movement, along with a three dimensional (3D) reconstruction, often achieved by means of some acquisition system, from simple video cameras to complex capture systems (e.g., [7–8]). Such evaluation generally requires dedicated spaces and, frequently, trained personnel for its operation. Therefore, the introduction of low-cost, flexible, and simple tools for dynamic analysis and 3D reconstruction of full body movement may provide the basis for a much wider implementation of these types of biomechanical assessments, in clinical use as well as in uncontrolled environments.

The use of video for kinematic analysis of human movement represents a simple tool for biomechanics studies. While not as accurate as optical capture systems, it provides easily obtainable valid data at a lower cost and does not require highly trained personnel for its operation (see [9]); therefore, it may satisfy some of the prerequisites for widespread implementation. One of the issues regarding video analysis is the reconstruction of the movement in 3D space, which commonly requires the use of at least two cameras. However, while two cameras are able to localize markers in 3D, quite often some of the markers may be hidden during capture and, therefore, provide partial information and/or necessitate the addition of more cameras, increasing costs. Also, in order to use two cameras, adequate space must be dedicated to allow for a complete acquisition.

Several approaches have been proposed for 3D reconstruction from video cameras. Widely used is Direct Linear Transformation (DLT) which, with a minimum of 6 calibrated markers, is able to link the information provided by two cameras to reconstruct a 3D space [10] (the comparison between the DLT method and other approaches is beyond the scope of this paper; for comparisons and considerations between calibration methods see [11]). This approach, however, does have its downsides. The first, already mentioned, is the obligatory use of at least 2 cameras. The second is that, when relating image points to object points, a series of constants must be used. These constants for each camera are represented by the projection coordinates, the global coordinates, and a series of coefficients that relate the two. Therefore, for each point we have 5 knowns (i.e., 2 projection and 3 global coordinates) and 11 unknowns (i.e., coefficients) per camera. These coefficients, or DLT parameters, appear in two equations per projected point. To find these unknown parameters, at least 11 equations are needed per camera. This can be done by adding calibration points: each additional calibration point introduces two new equations, while the DLT parameters remain the same. Therefore, by using 6 calibration points, which yield 12 equations, we are able to solve for the DLT parameters (for a detailed explanation of the DLT method see [10]). Thus, to calibrate the system according to the DLT method, a minimum of 6 accurately placed calibration points is needed, and the transformation matrix rows must be assembled for each calibration point, which may result in quite a tedious procedure. Also, when a marker is not visible on one camera, the reconstruction cannot be made for that marker.
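As a concrete illustration (our sketch in Python with NumPy, not the authors' code), the 11 DLT parameters of one camera can be estimated by stacking the two linearized equations per calibration point and solving the resulting system by least squares, using the standard DLT parameterization:

```python
import numpy as np

def dlt_calibrate(world, image):
    """Estimate the 11 DLT parameters of one camera from >= 6 calibration
    points by linear least squares.  world: N (X, Y, Z) tuples in global
    coordinates; image: the N corresponding (u, v) projections."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(world, image):
        # Two linearized equations per calibration point.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return L  # the 11 DLT parameters

def dlt_project(L, point):
    """Project a 3D world point to 2D image coordinates with 11 DLT parameters."""
    X, Y, Z = point
    denom = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / denom
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / denom
    return u, v
```

With 6 or more non-coplanar calibration points, the stacked system provides at least 12 equations for the 11 unknowns, as described above.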

An appealing alternative is the reconstruction of movement using a single camera, not only for the reduction in the number of cameras, and therefore costs, but also because, in cases in which a single camera is used for 2D analysis, a 3D reconstruction may provide additional information from the same recording. An example is gait analysis, where only a sagittal view is considered (e.g., [12–13]), leaving the information obtained partial. A few studies have addressed the issue of 3D reconstruction from a single projection, providing different methods (e.g., [14–17]). Worth noting is the work of Yang and Yuan [18], in which, by adding kinematic constraints associated with a human biomechanical model, the authors were able to reconstruct a 3D movement from a single camera quite precisely. However, this approach is based on the same principles as the DLT method and therefore requires solving for 11 parameters. The reduction of the DLT principles to a single camera with the added kinematic constraints, as well as the need for anthropometric data, further increases the complexity of this method, rendering it less approachable for personnel with no mathematical background. Also, the use of kinematic constraints renders each calibration subject-specific, rather than setup-specific, which may greatly increase preparation and analysis time.

Another general issue that merits attention is the use of commercially available cameras: with the advances in video technology, the accuracy of video-obtained data has increased and more low-cost alternatives to specialized cameras have emerged. In fact, the use of webcams and action cameras has been successfully implemented for biomechanical analysis of movement [19–20]. For these types of cameras, more often than not, information on the intrinsic properties of the camera (e.g., focal length, sensor specifications) is not readily available and, consequently, some reconstruction methods may not be employed (as also mentioned by [18]).

In a clinical setting, the implementation of an objective biomechanical assessment is still far from widespread. This may be due to a series of factors. As mentioned earlier, most of the elaborate systems for biomechanical analysis require a dedicated space (e.g., [21]), which is far greater than that found in a typical examination room, let alone at a patient's bedside or during house calls. Even for a two-camera setup, the space required to assure visibility of the entire body, though variable between cameras, exceeds that of a common examination room.

In single-camera approaches, space is not an issue. However, the increased complexity of implementing these methods, due to the reduction in cameras, may greatly limit their usage. Considering that healthcare professionals concentrate on specific fields of expertise, it is not surprising that most may not possess adequate knowledge or preparation for the application of said methods.

Another general consideration is that the majority of calibration processes are setup-specific, meaning that once the cameras are calibrated they cannot be moved. This reduces the mobility of the system and, therefore, may hinder common day-to-day use in dynamic environments, such as those found in clinical practice.

In addition, as costs and resources are also to be considered, acquisition of specialized cameras for movement analysis is not always possible. Especially today, when most portable devices are able to provide fairly decent video recordings readily at hand [22], acquisition of specialized equipment may, and should, be avoided when possible.

Taking all of these considerations together, it is clear that in order to render objective biomechanical assessment widespread, a simple, mobile instrument that is camera-independent and does not necessitate any specific background is needed.

Here we propose a simple approach that requires a minimum of 3 markers for calibration and is able to reconstruct movement in 3D space from a single projection. The algorithm is based on the scaling effect produced by a two dimensional projection. Seeing that the scaling effect occurs throughout the movement, we called this method dynamic marker scaling (DMS). This approach is independent of the intrinsic properties of the camera and may be widely implemented. In order to test the validity of the DMS method, we compared it to the commonly used DLT method with two cameras.

Materials and methods

Subject and task

A normal subject (female, age 25, height 167 cm, weight 47 kg) was analyzed for linear and angular kinematics of the entire body during a simple task of lifting a box (dimensions 10 x 4 x 2 cm, 100 g). No indication was provided to the subject regarding how the task was to be performed. The experimental protocol conformed to the requirements of the Federal Policy for the Protection of Human Subjects (U.S. Office of Science and Technology Policy) and the Declaration of Helsinki, and was approved by the Research Ethics Board of our Institution (Local Ethics Committee, Azienda Ospedaliero Universitaria Careggi, Florence, Italy). The participant provided written informed consent.

Cameras

Three cameras were used for data acquisition (GoPro Hero 5 Black), 2 for the DLT method and one for the DMS method. The DLT cameras were positioned orthogonally to one another, each forming a 45° angle with the center of the working area, at a distance of 220 cm from the center point. The camera used for the DMS method was placed at a distance of 272 cm, frontal to the center of the working area (Fig 1). In order to compare the same movement for DLT and DMS, all three cameras were synchronized using the GoPro Smart Remote control. Video acquisition was set to a resolution of 1080p at 120 frames per second for the DLT and DMS cameras. Other settings included: Field of view: Narrow, Color: flat, WB: 3000K, ISO: 1200, EV Compensation: -0.5.

Fig 1. Setup.

A diagram of camera placement for acquisition. The cameras used for DLT were placed at a 45° angle from the center point, while the DMS camera was placed frontal to the working area (in red). The placement of the calibration markers for the DMS method is also shown.

//doi.org/10.1371/journal.pone.0186443.g001

Calibration

For DLT calibration, 19 spherical markers were placed at known locations, fixed to a static structure, thus distributing the markers throughout the working area (Fig 2). For the DMS method, three spherical markers were placed on the ground at known distances from the camera (180 cm, 272 cm, and 364 cm; Fig 1). The distances were chosen to delineate the working area. All of the markers used in this study were 2.4 cm in diameter. For implementation, our algorithm requires the following parameters: marker height (simplified by placing markers on the ground), marker diameter, camera height (measured from the ground to the lens center at 90.6 cm), and marker distance from the camera's plane (i.e., ground distance). Seeing that scaling of objects occurs in reference to the center of the camera's lens, the actual camera distance was calculated from the measured ground distances (as the ground distance and camera height are known, see Fig 3). According to this calculation, a ground marker placed at 180 cm has a camera distance equivalent of 201.51 cm, considering a camera height of 90.6 cm. Therefore, if a marker at 180 cm (ground distance) is known to have a certain diameter when projected, any marker that measures the same diameter can be considered to be placed at a distance of 201.51 cm, in any direction, from the camera's center (i.e., camera distance; Fig 4).
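This ground-distance-to-camera-distance conversion is a single application of the Pythagorean theorem; a minimal sketch (ours, not the authors' code):

```python
import math

def camera_distance(ground_distance_cm, camera_height_cm):
    """Distance from the lens center to a marker lying on the ground:
    the hypotenuse of the (ground distance, camera height) triangle."""
    return math.hypot(ground_distance_cm, camera_height_cm)
```

For the setup above, `camera_distance(180, 90.6)` gives approximately 201.5 cm, matching the value reported in the text.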

Fig 2. DLT setup.

A diagram of the placement of calibration markers for the DLT method. A static structure formed by two fixed orthogonal frames was built to delineate the working area, and markers were fixed to it at known positions. For each marker, the global coordinates in cm are shown in parentheses as XYZ.

//doi.org/10.1371/journal.pone.0186443.g002

Fig 3. Relationship between position and projections.

A diagram describing differences between actual positions of markers, relative to the camera, and their projections. Camera Height is considered as the measured height from the ground to the lens center. Camera Distance is considered as the distance between the marker’s center and the center of the camera’s lens. Ground Distance is considered as the distance from the marker’s center to the camera’s plane. Also shown are differences in projection height, where more distant markers are projected higher than closer ones, as well as diameter changes relative to distance, with closer markers appearing bigger than more distant ones.

//doi.org/10.1371/journal.pone.0186443.g003

Fig 4. Camera distance.

A diagram demonstrating that when a marker measures a certain diameter, said marker will have a specific camera distance independent from its direction. Corresponding camera distances are shown in red lines, all of which are equal to the measured camera distance. An example from our measurements is a projected diameter of 24 pixels, and a corresponding camera distance of 203.56 cm, in any direction.

//doi.org/10.1371/journal.pone.0186443.g004

Calibration for conventional cameras is usually based on a perspective projection model, known as the pinhole camera model. While calculations based on a pinhole camera model can be solved with simple projection equations (e.g., u = fX/Z, v = fY/Z; where u and v are the projected coordinates, X, Y, and Z are the real world coordinates, and f is the focal length), due to camera parameters that do not match the pinhole model (e.g., large aperture, lens distortion, etc.), as well as our aim of creating a camera independent model (i.e., one that does not rely on prior knowledge of the intrinsic parameters of the camera, specifically the focal length), the equation needs to be made more general. As such, for a dynamic analysis through marker scaling the following premise was considered. There is an inverse relationship between marker size and its distance from the camera (i.e., the marker's diameter grows as its distance decreases; Fig 5). Therefore, two asymptotes are present according to these conditions, allowing for the implementation of a negative power function based on: f(x) = x^b, where f(x) represents the camera distance and x represents the marker's projected diameter. By including a scaling factor and a noise effect, the resulting function is: f(x) = a·x^b + c, where b is a negative number. During the formulation of our model, we conducted several trials in which markers were placed at different distances and their projected diameters measured. The results obtained with this function appeared to be in line with our measurements. To test the goodness of fit of our function we used the curve fit tool of Matlab, plotting 13 measurements of distances and diameters. Our calculated coefficients matched those obtained with the curve fit tool, with R2 and adjusted R2 values of 0.999, a root mean squared error of 0.07, and a sum of squared errors of prediction of 0.04 (see Fig 6).
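A hedged sketch of this calibration step (the authors used Matlab's curve fit tool; the grid-search fit below is our own simplification): the model a·x^b + c is linear in (a, c) for any fixed exponent b, so that subproblem can be solved in closed form while b is searched over a grid. With three calibration markers the three coefficients are determined exactly; with more measurements the same routine performs a least-squares fit.

```python
import numpy as np

def fit_power(diam, dist, b_grid=None):
    """Fit dist ~ a * diam**b + c with b < 0.  For each candidate exponent b
    the model is linear in (a, c), solved in closed form; the b with the
    smallest squared residual wins."""
    if b_grid is None:
        b_grid = np.linspace(-3.0, -0.1, 2901)
    diam = np.asarray(diam, float)
    dist = np.asarray(dist, float)
    best = None
    for b in b_grid:
        M = np.column_stack([diam ** b, np.ones_like(diam)])
        coeffs, *_ = np.linalg.lstsq(M, dist, rcond=None)
        err = float(np.sum((M @ coeffs - dist) ** 2))
        if best is None or err < best[0]:
            best = (err, coeffs[0], b, coeffs[1])
    _, a, b, c = best
    return a, b, c

def marker_distance(diam_px, a, b, c):
    """Camera distance predicted from a projected diameter (in pixels)."""
    return a * diam_px ** b + c
```

With synthetic diameter/distance pairs generated from known coefficients, the fitted function recovers intermediate distances accurately within the calibrated range.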

Fig 5. Relationship between diameter and distance.

A diagram describing the relationship between a marker's camera distance and its diameter. The curve of this relationship takes the form of an inverse power function. From this function it is possible to see that as the marker gets closer to the camera center its projected size approaches infinity, and as its diameter approaches zero, the distance grows to infinity. In the figure we have also included some of our measurements of the marker's camera distance (in cm) and its projected diameter (in pixels).

//doi.org/10.1371/journal.pone.0186443.g005

Fig 6. Goodness of fit of function.

Plotted values of paired measurements of camera distances (in cm) and corresponding diameters (in pixels). The red curve represents the function that was fitted to the measurements. For the goodness of fit values, see text.

//doi.org/10.1371/journal.pone.0186443.g006

In order to solve our equation, seeing that there are 3 unknown coefficients, a minimum of 3 known points is needed. While the choice of a power function may be reasonable enough, it is still arguable how accurate this function may be. However, considering that we are interested only in results occurring within a limited numerical range (i.e., the working area), and that our calibration markers were placed at the limits and center of said range, measurements obtained with the function are expected to be relatively accurate.

After solving for the camera distance, a correction factor should be applied to the Y and X axes seeing that, due to perspective distortion, objects more distant from the camera appear closer to the center (i.e., the optical axis). For example, on the Y axis, more distant objects appear higher when placed below the optical axis of the camera, and lower when placed above it (Fig 3). On the X axis, more distant objects appear more medial whereas closer objects appear more lateral. In order to resolve the perspective, the known marker diameter can be used. By taking the projected diameter of a marker and calculating the projected distance of that marker from a reference point (for simplicity we used the axes origins), we can quantify that same distance in terms of projected diameter instead of pixels. Seeing that the actual diameter of the marker is known, the conversion of that measurement into centimeters is made by a simple multiplication, which can be expressed by the following equation: actual(Y) = (projected(y) / projected(diameter)) * actual(diameter). By using a spherical marker the same approach can be used for both the X and Y axes; this concept is exemplified in Fig 7.
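The correction is a single ratio-and-scale step; a minimal sketch (ours, with hypothetical argument names):

```python
def perspective_correct(projected_offset_px, projected_diam_px, actual_diam_cm):
    """Convert a projected distance from the reference point (in pixels)
    into centimeters, using the marker's projected vs. actual diameter:
    actual = (projected_offset / projected_diameter) * actual_diameter."""
    return projected_offset_px / projected_diam_px * actual_diam_cm
```

For instance, a 2.4 cm marker projected at 24 px, whose center lies 120 px from the origin, is 5 marker-diameters away, i.e. 12 cm in real terms; the same call serves both the X and Y axes.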

Fig 7. Perspective correction.

A diagram describing the correction for perspective distortion. By using the measured projected diameter of a marker as well as its distance to some reference point, it is possible to calculate the ratio between the projected diameter and the distance to the reference. As the real marker diameter is known, multiplying the ratio by the real diameter yields the actual distance from the reference. This may be applied to both the X and Y axes. In the diagram we can see two markers (bold circles) that are placed at different distances from the camera (as shown at the bottom of the figure) and that in their projection (bold circles in the frame) appear to have different diameters (the more distant marker having a smaller projected diameter) and different localizations (the more distant marker appearing higher and more medial). As we can see, the markers are effectively placed one behind the other (at the bottom of the figure) and, in fact, when calculating the ratio between projected diameter and distance to the reference, both present the same ratio, meaning that in reality they are placed one behind the other (i.e., they have the same X and Y global coordinates).

//doi.org/10.1371/journal.pone.0186443.g007

After obtaining data relative to the X and Y axes, the measured camera distances must be converted into distances from the camera's plane (i.e., the Z axis). This conversion can be made using the Pythagorean theorem, with the measured camera distance and the obtained Y value.
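A minimal sketch of this last step (ours; as in the text, only the Y offset is used, though the X offset could be subtracted the same way):

```python
import math

def plane_distance(camera_distance_cm, y_offset_cm):
    """Distance from the camera's plane (Z axis), via the Pythagorean
    theorem, from the camera distance and the marker's vertical offset
    from the lens center."""
    return math.sqrt(camera_distance_cm ** 2 - y_offset_cm ** 2)
```

For a ground marker (vertical offset equal to the 90.6 cm camera height), this recovers the measured ground distance: a camera distance of about 201.5 cm maps back to Z ≈ 180 cm.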

Data acquisition

For a full body analysis of the movement, 12 joints were considered (ankle, knee, hip, shoulder, elbow, and wrist joints), and additional markers were placed on the head, feet, and trunk for a total of 16 points of interest (see Fig 8). To assure joint tracking, 2 markers were placed per joint (3 for the knees and ankles). For both the DLT and DMS methods, marker tracking was done manually using the open source software Tracker (//physlets.org/tracker/). The data were extracted from Tracker as x-y coordinates for each tracked point and then analyzed using Matlab.

Fig 8. Marker placement.

A diagram illustrating marker placement on the subject. As shown, 16 points were taken into consideration, with 2 markers per joint for the shoulders, elbows, wrists, and hips; 3 markers per joint for the knees and ankles; and a single marker for the head, trunk, and left and right feet. Also shown are the joint angles, illustrated in the right part of the diagram; angles are named according to the joint at the vertex.

//doi.org/10.1371/journal.pone.0186443.g008

For the DMS analysis, in addition to the coordinates of the marker, we are also interested in its diameter; therefore, tracking points were placed at each side of the marker. Thus, by calculating the difference between the two tracked points, the marker's diameter may be retrieved (Fig 9). The use of spherical markers for diameter acquisition provides the advantage that the dimensions of the marker do not change with movement. As long as the marker is half visible, data regarding its distance may be retrieved. Also, by using spherical markers, the perspective correction for both the X and Y axes is simplified significantly.

Analysis

The basis of biomechanical analysis from video-recorded data is the extraction of kinematic parameters. Fundamental among these are the changes in joint positions and angles over time, from which it is possible to calculate other kinematic parameters such as linear and angular velocities and accelerations, as well as to perform a more in-depth analysis via inverse dynamics for kinetic data. In order to calculate the various kinematic parameters, we constructed an algorithm in Matlab. Seeing that the DLT and DMS methods are calibrated differently, we limited our analysis to linear displacements and angles, both of which are independent of the coordinate system used.

Linear displacement was calculated as the change in position from the starting position for every point in time. In order to resolve displacement in three dimensional space, a vectorial calculation is necessary. Therefore, the change in position for every frame was calculated for each axis separately (e.g., xi-x0). Then the three dimensional displacement for a frame was calculated as the square root of the sum of the squared changes in position along each axis.
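This displacement computation (the Euclidean norm of the per-axis changes from the first frame) can be sketched as follows; the code is our illustration, not the authors' Matlab implementation:

```python
import numpy as np

def displacement(traj):
    """Per-frame 3D displacement from the starting position.
    traj: (n_frames, 3) array of x, y, z coordinates per frame."""
    traj = np.asarray(traj, float)
    # Subtract the first frame, then take the per-frame Euclidean norm.
    return np.sqrt(np.sum((traj - traj[0]) ** 2, axis=1))
```

For example, a point moving from (0, 0, 0) to (3, 4, 0) has a displacement of 5 in that frame.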

Angle calculation was made by taking the 3D coordinates of three points at a time, considering the middle point as the vertex. First, the rays were calculated as the vectors from the first to the middle point and from the middle to the third point. Then the dot product of the vectors and the norm of their cross product were obtained, and the four-quadrant inverse tangent of these gave the angle for the three points in radians, which was then converted to degrees.

The following angles were considered for each side of the body: ankle (foot-ankle-knee), knee (ankle-knee-hip), hip (knee-hip-trunk), shoulder (trunk-shoulder-elbow), elbow (shoulder-elbow-wrist), Fig 8.
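The angle computation can be sketched as below (our illustration; here both rays are taken from the vertex, for which the same four-quadrant inverse tangent of cross-product norm and dot product yields the interior joint angle):

```python
import numpy as np

def joint_angle(p_first, p_vertex, p_third):
    """Interior angle (degrees) at p_vertex, via the four-quadrant inverse
    tangent of the cross-product norm and the dot product of the rays."""
    u = np.asarray(p_first, float) - np.asarray(p_vertex, float)
    v = np.asarray(p_third, float) - np.asarray(p_vertex, float)
    angle = np.arctan2(np.linalg.norm(np.cross(u, v)), np.dot(u, v))
    return np.degrees(angle)
```

The atan2 formulation is numerically stable for near-collinear points, where the arccosine of a normalized dot product would lose precision.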

Statistics

The coefficient of determination (R2) was used to determine the closeness of fit between measurements obtained with the DLT and DMS methods. The residual differences between the two methods were measured for each joint in order to quantify the magnitude of the differences, reported here as mean, standard deviation (SD), and maximal residual difference (RD). Although equivalence tests are usually used to determine noninferiority or equivalence between treatments [23], we believe such a test may help to better characterize the level of similarity between the two methods. Therefore, a Two One-Sided Test (TOST) was implemented to better define the equivalence between the two methods [24]. Seeing, however, that equivalence margins are not defined in the current literature for this type of analysis, we used the equivalence test to find said margins. In this way, we hope to provide at least some quantification of the agreement between measurements, which may benefit future studies in this field.
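A paired TOST on the per-frame differences between the two methods can be sketched as below (our illustration using SciPy's one-sample t-test; the ±7.2 cm margin is the value reported in the Results):

```python
import numpy as np
from scipy import stats

def tost(differences, margin, alpha=0.05):
    """Two One-Sided Tests for equivalence: the mean difference between
    methods is declared equivalent to zero (within +/- margin) when it is
    significantly greater than -margin AND significantly less than +margin."""
    d = np.asarray(differences, float)
    p_lower = stats.ttest_1samp(d, -margin, alternative='greater').pvalue
    p_upper = stats.ttest_1samp(d, margin, alternative='less').pvalue
    p = max(p_lower, p_upper)
    return p, p < alpha
```

The overall TOST p-value is the larger of the two one-sided p-values, so equivalence is declared only when both one-sided tests reject.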

Results

The two approaches, overall, provided relatively similar results (for raw data see S1 Data). The 3D reconstructions acquired from both methods along with the image sequence of the real movement are shown in Fig 10 whereas graphical representations of the results are shown in Fig 11 and Fig 12.

Fig 10. A reconstruction of the movement from both the DLT and DMS methods.

The actual action, as sequenced images, is displayed along with the reconstruction for each method. For simplicity, only 1 of every 10 frames is shown for the reconstructions and image sequence. Reconstructions are shown from three different points of view: front, top, and side. To differentiate between body segments, different colors were used for the lower extremities (black), upper extremities (red for DLT and blue for DMS), and head (cyan for DLT and green for DMS).

//doi.org/10.1371/journal.pone.0186443.g010

Fig 11. Comparison between the DLT and DMS methods for linear displacement.

DLT results (red lines) and the DMS results (blue lines) are shown in the graphs. Graphs represent the amount of displacement for each joint (in cm, from 0 to 90 cm) over time (in seconds).

//doi.org/10.1371/journal.pone.0186443.g011

Fig 12. Comparison between the DLT and DMS methods for angles.

DLT results (red lines) and the DMS results (blue lines) are shown in the graphs. Graphs represent the angle measured (in degrees, from 0 to 180) over time.

//doi.org/10.1371/journal.pone.0186443.g012

For linear displacement, R2 values were all above 0.9 (ranging from 0.92 to 0.99), with the exception of the left ankle (0.7) and the right and left feet (0.14 and 0.53, respectively). Mean RDs ranged from 0.45 to 3.3 cm, with SDs ranging from 0.35 to 1.53 cm (see Table 1). Also, TOST values were found to be significant for all points when the equivalence margins were set to ±7.2 cm. The margins were found after repeated measurements revealed equivalence between methods for all points, considering a p-value of 0.05.

For angles, R2 values were more dispersed (see Table 2), with the highest values measured for the hips, knees, and left ankle (ranging from 0.91 to 0.99). Intermediate values were measured for the two shoulders and the right ankle (ranging from 0.22 to 0.73), whereas low values were measured for the left and right elbows (0.01 and 0.025, respectively). Mean RDs were all under 10 degrees, with SD values measuring less than 6 degrees, with the exception of the left elbow, which measured a mean RD of 11.64 (SD 8.32) degrees. TOST values were found to be significant for all angles when the equivalence margins were set to ±10.009 degrees, after repeated measurements revealed equivalence between methods for all angles, considering a p-value of 0.05.

Discussion

For the most part, the results obtained with the DMS method appear to be in line with the DLT method. For linear displacement, greater differences were present for the less mobile joints (i.e., feet and ankles). However, the mean RDs for these joints were all under 1.5 cm. This difference is relatively low, especially when considering that the highest mean RD overall was registered for the left shoulder (3.3 cm). This is further emphasized by the SDs, which were all low for both feet and ankles (0.35–0.69 cm), along with maximal RDs of 1.49–3.87 cm. Compared to the other joints, the highest SD was registered for the left shoulder (1.54 cm), and the highest maximal RD for the head (7.76 cm). This type of trend is to be expected, seeing that measurements for less mobile points are more susceptible to small differences, especially when considering that both methods are approximations of the real values and, as such, are more likely to present greater differences for fixed points.

For the angles measured between the two methods, the data likewise suggest that results for the less active joints (in angular terms) differ more than those for the more active joints. According to their R2 values, the joints may be divided into three activity groups: highly active (knees, hips, and ankles), moderately active (shoulders), and least active (elbows). The exception in this case was the right ankle, which presented an R2 value of 0.46; however, the graph shows that this value is mostly due to dispersion (Fig 12). The level of joint activity may also be seen in the sequenced images of the movement, in which the elbows remain at a relatively stationary angle compared to the other joints, followed by the shoulders (Fig 10). The graphs of the angular measurements show that the general trend is very similar between methods (Fig 12), with greater data dispersion for the DMS method than for the DLT method. As with linear displacement, the more stationary the measurement, the greater the differences that ensue. Even so, all mean RDs between methods were under 10 degrees, with the exception of the left elbow (11.64 degrees). This difference may be attributed to the relatively small distances between joint positions: even slight movements can greatly influence the measured angles, an effect that is magnified the more stationary the measurement is.

Some general limitations shared by the DLT and DMS methods should be noted. As pointed out earlier, video analysis produces less accurate results than other systems, such as optical capture systems [9]. It is also well known that marker-based measurements may suffer inaccuracies due to incorrect marker placement, skin movement, attachment to loose clothing, and so on. In fact, alternative markerless approaches are emerging to overcome these difficulties [25].

A specific limitation of the DMS method, as demonstrated by the graphs, is the greater dispersion of its measurements. This is mostly because a more precise measurement is needed to retrieve the markers' diameters; being a pixel-based measurement, the measured diameters are likely to be skewed from one frame to another, especially when objects are more distant or less mobile. Moreover, dispersion of data may result from acquisition inaccuracies due to contrast issues within the video, which can limit the visibility of the markers' contours. The importance of adequate contrast between a marker and its surroundings is also emphasized in other works (e.g., [25–26]). In fact, in our experience the highest dispersion was found for the joints with the lowest contrast between the marker and its surroundings (i.e., head, trunk, wrists).

These inaccuracies of the DMS method may, however, be substantially reduced by increasing the acquisition resolution and frame rate. Newer commercial cameras, such as action cameras, provide resolutions of up to 4K at 30 frames per second (reduced when the frame rate is increased), which is sufficiently high to reduce measurement errors. Manual tracking of the markers, instead of the automated algorithms of various software packages, may further increase the accuracy of the method. Finally, data dispersion may be reduced by applying adequate filters or data smoothing.
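As one hypothetical illustration of the smoothing suggested above, a centered moving average can damp frame-to-frame dispersion in a marker trajectory. The trajectory, noise level, and window size below are illustrative, not values from the study.

```python
import numpy as np

def moving_average(signal, window=7):
    """Smooth a 1-D marker trajectory with a centered moving average.

    mode="same" keeps the output the same length as the input; the first and
    last few samples are attenuated because the window runs off the edges.
    """
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# Synthetic example: a sinusoidal joint trajectory with pixel-level noise
t = np.linspace(0.0, 1.0, 100)
clean = np.sin(2 * np.pi * t)
noisy = clean + np.random.default_rng(1).normal(0.0, 0.2, t.size)
smoothed = moving_average(noisy, window=7)
```

A wider window suppresses more noise but also attenuates genuine fast movement, so the window should stay short relative to the duration of the fastest joint motion of interest.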

Still, it should be considered that the DLT method has its own inherent errors, as the transformation from 2D image coordinates to 3D space based on only a few markers remains an approximation; this error may be reduced by increasing the number of calibration markers. It is worth mentioning that in this study we compared the DLT method, calibrated with 19 points, to the DMS method, calibrated with only three diameters.
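For reference, the classical DLT calibration used as the comparison method (ref [10]) reduces to solving a linear least-squares system for 11 parameters per camera from control points with known 3D coordinates. The sketch below is a generic textbook formulation, not the authors' implementation; the synthetic camera and control points are illustrative.

```python
import numpy as np

def dlt_calibrate(xyz, uv):
    """Estimate the 11 DLT parameters of one camera from >= 6 control points
    with known 3D coordinates (xyz) and digitized image coordinates (uv)."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.extend([u, v])
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L

def dlt_project(L, point):
    """Project a 3D point to image coordinates with the 11 DLT parameters."""
    X, Y, Z = point
    denom = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    return np.array([(L[0] * X + L[1] * Y + L[2] * Z + L[3]) / denom,
                     (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / denom])

# Synthetic pinhole camera (focal length 800 px, principal point (320, 240),
# 5 units from the origin) used to generate noise-free control-point images.
def camera(point):
    X, Y, Z = point
    d = Z + 5.0
    return np.array([(800 * X + 320 * d) / d, (800 * Y + 240 * d) / d])

points = np.random.default_rng(0).uniform(-1, 1, (8, 3))   # 8 control points
uv = np.array([camera(p) for p in points])
L = dlt_calibrate(points, uv)
max_err = max(np.linalg.norm(dlt_project(L, p) - camera(p)) for p in points)
```

With noise-free synthetic data the reprojection error is essentially zero; with real digitized points the system is overdetermined and noisy, which is why adding calibration points beyond the minimum six (as with the 19 points used here) reduces the residual of the least-squares fit.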

Conclusions

The DMS method appears to provide results similar to those of the DLT method, at least where gross movements are concerned. It may be used alongside the DLT method in cases in which markers become hidden from one of the cameras: by also calibrating the cameras according to the DMS method, data for such a marker may still be salvaged. The algorithm presented here may also be of value for data acquisition in specific tasks such as gait, which, when studied with a single camera, is typically restricted to the sagittal plane (e.g., [12–13]), eliminating information that could be obtained from the frontal plane. Perhaps the biggest advantage of the DMS method is that the entire calibration process is very simple compared to other approaches to 3D reconstruction with a single camera, or with multiple cameras in general, which translates into rapidly obtained data. The use of a single camera and three markers also renders it much more mobile than other methodologies. This may be of value especially when the goal of the measurements is a general estimation of the movement rather than a precise description. As pointed out in a recent review by Hewett and Bates [27], preventive biomechanical interventions, referred to by the authors as "preventive biomechanics", may help reduce the incidence of various musculoskeletal injuries. Therefore, having some form of measurement available in a clinical setting may help identify the people who would benefit most from preventive interventions. The simplicity and mobility of the DMS method may render it an adequate instrument for widespread use of this type in clinical settings or in other uncontrolled environments.

Supporting information

References

1. Sibley KM, Straus SE, Inness EL, Salbach NM, Jaglal SB. Balance assessment practices and use of standardized balance measures among Ontario physical therapists. Phys Ther. 2011; 91(11):1583–91. pmid:21868613
2. Ferrarello F, Bianchi VA, Baccini M, Rubbieri G, Mossello E, Cavallini MC, et al. Tools for observational gait analysis in patients with stroke: a systematic review. Phys Ther. 2013; 93(12):1673–85. pmid:23813091
3. Claesson IM, Grooten WJ, Lökk J, Ståhle A. Assessing postural balance in early Parkinson's Disease-validity of the BDL balance scale. Physiother Theory Pract. 2017; 8:1–7.
4. Lussiana T, Gindre C, Mourot L, Hébert-Losier K. Do subjective assessments of running patterns reflect objective parameters? Eur J Sport Sci. 2017; 10:1–11.
5. Nussbaumer S, Leunig M, Glatthorn JF, Stauffacher S, Gerber H, Maffiuletti NA. Validity and test-retest reliability of manual goniometers for measuring passive hip range of motion in femoroacetabular impingement patients. BMC Musculoskelet Disord. 2010; 11:194. pmid:20807405
6. Cancela J, Pallin E, Orbegozo A, Ayán C. Effects of three different chair-based exercise programs on people over 80 years old. Rejuvenation Res. 2017; pmid:28482740
7. Baskwill AJ, Belli P, Kelleher L. Evaluation of a Gait Assessment Module Using 3D Motion Capture Technology. Int J Ther Massage Bodywork. 2017; 10(1):3–9. pmid:28293329
8. Mustapa A, Justine M, Mustafah NM, Manaf H. The Effect of Diabetic Peripheral Neuropathy on Ground Reaction Forces during Straight Walking in Stroke Survivors. Rehabil Res Pract. 2017; 2017:5280146. pmid:28491477
9. Chèze L. The Different Movement Analysis Devices Available on the Market. In: Kinematic Analysis of Human Movement. John Wiley & Sons, Inc.; 2014. doi:10.1002/9781119058144.ch2
10. Abdel-Aziz YI, Karara HM. Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry. Proceedings of the Symposium on Close-Range Photogrammetry. 1971; 1:18. Falls Church, VA: American Society of Photogrammetry.
11. Remondino F, Fraser C. Digital camera calibration methods: considerations and comparisons. In: Proceedings of the ISPRS Commission V Symposium: Image Engineering Vision Metrology; Dresden. Institute of Photogrammetry and Remote Sensing; 2006:266–272.
12. Castelli A, Paolini G, Cereatti A, Della Croce U. A 2D Markerless Gait Analysis Methodology: Validation on Healthy Subjects. Comput Math Methods Med. 2015; 2015:186780. pmid:26064181
13. Yang C, Ugbolue UC, Kerr A, Stankovic V, Stankovic L, Carse B, et al. Autonomous Gait Event Detection with Portable Single-Camera Gait Kinematics Analysis System. Journal of Sensors. 2016; Article ID 5036857.
14. Ambrosio J, Abrantes J, Lopes G. Spatial reconstruction of human motion by means of single camera and a biomechanical model. Hum Movement Sci. 2001; 20:829–851.
15. Howe NR, Leventon ME, Freeman WT. Bayesian reconstruction of 3D human motion from single-camera video. In: Advances in Neural Information Processing Systems (NIPS). 2000; 12:820–826.
16. Bowden R, Mitchell TA, Sarhadi M. Reconstructing 3D pose and motion from a single camera view. In: BMVC, Southampton, UK. 1998; 904:913.
17. Wei X, Zhang P, Chai J. Accurate Realtime Full-body Motion Capture Using a Single Depth Camera. ACM Trans Graph. 2012; 31(6): Article 188. doi:10.1145/2366145.2366207
18. Yang F, Yuan X. Human movement reconstruction from video shot by a single stationary camera. Ann Biomed Eng. 2005; 33(5):674–84. pmid:15981867
19. Krishnan C, Washabaugh EP, Seetharaman Y. A low cost real-time motion tracking approach using webcam technology. J Biomech. 2015; 48(3):544–8. pmid:25555306
20. Bernardina GR, Cerveri P, Barros RM, Marins JC, Silvatti AP. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis. PLoS One. 2016; 11(8):e0160490. pmid:27513846
21. Riberto M, Liporaci RF, Vieira F, Volpon JB. Setting up a Human Motion Analysis Laboratory: Camera Positioning for Kinematic Recording of Gait. Int J Phys Med Rehabil. 2013; 1:131.
22. Boissin C, Fleming J, Wallis L, Hasselberg M, Laflamme L. Can We Trust the Use of Smartphone Cameras in Clinical Practice? Laypeople Assessment of Their Image Quality. Telemed J E Health. 2015; 21(11):887–92. pmid:26076033
23. Walker E, Nowacki AS. Understanding equivalence and noninferiority testing. J Gen Intern Med. 2011; 26(2):192–6. pmid:20857339
24. Rogers JL, Howard KI, Vessey JT. Using significance tests to evaluate equivalence between two experimental groups. Psychol Bull. 1993; 113(3):553–65. pmid:8316613
25. Ceseracciu E, Sawacha Z, Cobelli C. Comparison of markerless and marker-based motion capture technologies through simultaneous data collection during gait: proof of concept. PLoS One. 2014; 9(3):e87640. pmid:24595273
26. Magalhaes FA, Sawacha Z, Di Michele R, Cortesi M, Gatta G, Fantozzi S. Effectiveness of an automatic tracking software in underwater motion analysis. J Sports Sci Med. 2013; 12(4):660–7. pmid:24421725
27. Hewett TE, Bates NA. Preventive Biomechanics. Am J Sports Med. 2017; 1:363546516686080.
