Open Access (CC BY 4.0). Published by De Gruyter, July 28, 2020

Kinect Controlled NAO Robot for Telerehabilitation

  • Md Assad-Uz-Zaman, Md Rasedul Islam, Mohammad Habibur Rahman, Ying-Chih Wang and Erin McGonigle

Abstract

In this paper, we focus on a human upper limb rehabilitation scheme that utilizes the concept of teleoperation. Teleoperation allows a therapist to demonstrate rehabilitation exercises remotely to different groups of people at the same time; groups in different places connected to the same network can receive therapy from the same therapist simultaneously using the telerehabilitation scheme. Here, we present a humanoid robot, NAO, that can be operated remotely by a therapist to demonstrate exercises to a patient. To mimic the movements demonstrated by the therapist, the Kinect V2 sensor, a markerless vision-based motion-tracking device, was used. The modified Denavit-Hartenberg (DH) convention was used for kinematic modeling of the human upper arm. From the Kinect data, a geometric solution was developed to find a unique inverse kinematic solution of the human upper extremity. Experimental results revealed that NAO could be teleoperated successfully to instruct and demonstrate different arm movement exercises to patients in real time.

1 Introduction

Stroke is one of the major causes of human upper limb impairment. Moreover, it is the fifth leading cause of death for Americans. Statistics reveal that every 40 seconds someone in the United States suffers a stroke, and every 4 minutes someone dies from one [1]. Including the cost of health care services, medicines, and missed days of work, the total estimated cost is approximately $34 billion each year in the United States [1]. According to the Royal College of Physicians, London, 77% of stroke survivors experience altered arm function, and 40% of them are left with a non-functional arm [2]. Recovery from post-stroke disability is achieved through rehabilitation programs. Such programs are often labor-intensive, and therapists usually work with patients one-on-one. Rehabilitation therapy in all phases of post-stroke care includes intensive, highly repetitive, task-specific training [3, 4]. However, in the US, only 30% of stroke survivors receive rehabilitation [5] due to an insufficient number of rehabilitation facilities [6]. These challenges call for the development of new or alternative rehabilitation technologies. Robot-assisted rehabilitation is one such emerging technology; it may improve rehabilitation quality and productivity and reduce costs for individuals with a disability.

Extensive research has been done to develop robotic devices for the rehabilitation of patients with upper limb impairment. The most commonly known robotic orthotic devices include, but are not limited to, the ETH arm rehabilitation robot (ARMin) [7], InMotion2, developed by Interactive Motion Technologies, Inc., Boston, MA (the commercially available version of the MIT-Manus) [8], the Mirror Image Movement Enabler (MIME) [9], and the ETS-MARSE [10].

Recently, researchers have been interested in developing robotic systems that work as a coach and perform a therapeutic role through human-robot interaction. Studies reveal that robots in therapeutic roles keep patients motivated and elicit promising, positive interaction responses [11]. Teleoperation of robotic devices is one of the emerging technologies that enable researchers to use robots in such a role. The main idea behind this approach is to extract motion information from an actor's movements and replicate those movements on a robot within the robot's anatomical joint limits and constraints. The major challenge is to record the actor's joint position coordinates (e.g., of the upper limb) and accurately convert them into joint angles to control the robotic device. Researchers have used a variety of motion tracking devices to capture human motion. Thobbi and Sheng [12] used a Vicon tracker, which relies on multiple markers, to track human movements and applied this information to teleoperate a NAO robot. Cole et al. [13] used a markerless system consisting of a single camera to track an actor's motion and map it onto a simulated HOAP-2 humanoid robot; to track the actor's joint positions, the actor had to wear a special colored suit. Other researchers have used different methods to obtain joint information from the actor's tracked performance. Wang et al. [14] transformed the actor's motion information, obtained by a Kinect sensor, into key frames and applied inverse kinematics through an optimization process to obtain joint information for the NAO robot. Suleiman et al. [15] used a Kinect sensor to track an actor's movements and then formulated motion imitation as an optimization problem whose constraints are the humanoid robot's physical limitations. Rosado et al. [16] also treated motion mimicking as an optimization problem and teleoperated a simulated 4-DoF robot model by tracking actor movement with Kinect. Ningjia et al. [17] calculated the robot's joint angles using a spatial vector method from data obtained by tracking a human actor with Kinect. Ude et al. [18] used a standard non-linear least squares optimization method to obtain robot motion data from the actor's tracked data.

In this paper, we use the Kinect v2 sensor to track human motion. In the last decade, the Kinect sensor has been used extensively for human motion assessment and rehabilitation. Kinect sensors integrated with virtual reality for rehabilitation gaming therapy show promising results for patients' motor and functional recovery [19, 20, 21, 22, 23]. The Kinect sensor has also been used for upper and lower extremity motion assessment, for tracking recovery progress, and for providing feedback [24, 25, 26, 27]. The Kinect motion tracking system is markerless and does not require any sophisticated calibration; it also avoids the inaccuracies that marker-based motion capture systems can introduce through incorrect marker placement. Instead of solving complex inverse kinematics to estimate joint angles, we propose a simple, fully geometric solution that allows quick estimation of the joint angles and needs less computational power. We used the NAO robot to demonstrate upper limb exercises by mimicking the actor's motion. The humanoid robot NAO can communicate verbally, which provides a basis for social interaction and sharing emotions [28, 29]. The NAO robot has shown promising usability in pediatric rehabilitation, socially assistive robotics, human motion imitation, and learning artificial intelligence [30, 31, 32, 33, 34].

2 Humanoid Robot NAO

For the demonstration of telerehabilitation, we used the humanoid robot NAO because of its human-like (a) appearance; (b) acrobatic movement capability; and (c) communication capability using vision, speech, hearing, and touch sensors. These features make NAO one of the most promising autonomous programmable robots [35]. NAO was developed by Aldebaran Robotics. It has 25 joints, two cameras, four microphones, two loudspeakers, nine tactile sensors, and eight pressure sensors, and it is 574 mm tall and 275 mm wide. The NAO robot has two joints in the head (neck), five joints in each arm plus hand open/close motion, five joints in each leg, and two hip joints. These sensors and actuators allow NAO to perform simple human motions such as walking forward and backward, sitting down and standing up, waving a hand, and playing soccer. Because this study focuses on an upper-extremity rehabilitation scheme, we used only the right arm of the NAO throughout. NAO has its own central processing unit (CPU) to execute commands and control all the sensors. Its motherboard carries an Intel Atom Z530 processor (1.6 GHz clock speed, 512 KB cache, 533 MHz FSB), 1 GB RAM, 2 GB flash memory, and an 8 GB Micro SDHC card. It communicates through Wi-Fi (IEEE 802.11 a/b/g/n), Ethernet, and a USB interface; the USB port is mainly used for updating the robot's operating system. For advanced programming of the NAO robot, Aldebaran provides APIs in C++, Python, and Java for NAO V5. The Python SDK requires Python 2.7 (32-bit) and works only in that environment. The C++ SDK requires a C++ compiler for each operating system; on Windows it works only with Visual Studio 2010, and because Windows lacks a C++ cross compiler, Windows users must also install qiBuild, a tool designed to generate cross-platform projects using CMake. In our study, we used Python 2.7.13 and the Python SDK to control the NAO for advanced functionality.
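To give a concrete sense of how NAO is commanded programmatically, the following minimal sketch (ours, not code from the paper) uses the NAOqi Python SDK to move the three right-arm joints considered in this study; the robot's IP address, port, and target angles are placeholder values.

```python
# Minimal sketch (not the authors' script): commanding NAO's right-arm joints
# through the NAOqi Python SDK (Python 2.7). IP, port, and angles are placeholders.
import math
from naoqi import ALProxy

NAO_IP = "192.168.1.10"   # hypothetical robot address on the local network
NAO_PORT = 9559           # default NAOqi port

motion = ALProxy("ALMotion", NAO_IP, NAO_PORT)
posture = ALProxy("ALRobotPosture", NAO_IP, NAO_PORT)

motion.wakeUp()                        # switch on joint stiffness
posture.goToPosture("StandInit", 0.5)  # move to a known starting posture

# Right-arm joints used in this study; angles are in radians and the last
# argument is the fraction of maximum joint speed.
names = ["RShoulderPitch", "RShoulderRoll", "RElbowRoll"]
angles = [math.radians(45.0), math.radians(-30.0), math.radians(40.0)]
motion.setAngles(names, angles, 0.2)
```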

Figure 1: Active joints of NAO robot [36]

3 Human Motion Tracking Device

The most important element of teleoperation is tracking the human upper limb motion. For this purpose, we use the markerless vision-based tracking device Kinect. The Kinect sensor, developed by Microsoft, has an IR depth sensor, an RGB color camera, and a four-microphone array, and it can track the full body of a person. Due to its low price, Kinect is used for many applications in different fields. Using its depth sensor, it produces a 3D depth image with a resolution of 512 x 424 at 30 frames per second (FPS) over a range of 0.5 to 8 meters. The depth sensing is based on infrared light, so it is largely independent of ambient lighting. The RGB camera provides full-HD 1920 x 1080 resolution at either 30 or 15 FPS, depending on lighting conditions. A notable feature of the Kinect sensor is full-body tracking: it tracks 25 joints of the human body, including the hand state (open or closed), and provides the coordinates of each joint in 3D space. It can track up to six persons simultaneously within a range of 0.5 to 4.5 meters. Through the microphone array, it can also detect sound direction. The field of view of the depth camera is 70° horizontally and 60° vertically. To access the Kinect data, we used the Kinect V2 SDK, which provides the necessary API to extract Kinect data using Visual Studio 2012 or higher and is compatible with C++, C#, and Java. In this research, we extracted Kinect data to analyze human arm kinematics and to teleoperate the NAO in real time to establish the proposed telerehabilitation scheme. As mentioned earlier, Kinect can track 25 joints of the human body; however, our focus is on the human upper extremity, so we tracked only 11 joints: the shoulder, elbow, and wrist joints of each arm, along with the head, neck, spine shoulder, spine mid, and spine base joints.

4 Human Upper Arm Kinematics

In the proposed telerehabilitation scheme, the human right arm movement is used to actuate NAO's right arm. To allow NAO to replicate this movement, the human upper arm was modeled with a two degrees-of-freedom (DoF) shoulder joint (providing abduction/adduction and vertical flexion/extension) and a one-DoF elbow joint (providing flexion/extension). To develop the kinematic model, the human right arm's kinematics was analyzed using the modified DH convention [38]. The coordinate frame attachment for this analysis is depicted in Figure 4.

In this model, joint 1 represents shoulder abduction/adduction, joint 2 corresponds to shoulder vertical flexion/extension, and joint 3 represents elbow flexion/extension. The distance between the shoulder and the elbow joint is d1, and that between the elbow and the wrist joint is d2. The modified DH parameters for the human upper arm are summarized in Table 1, where αi−1 is the link twist, ai−1 is the link length, di is the link offset, and θi is the joint angle.

Table 1

Modified Denavit-Hartenberg parameters

Joint (i) | αi−1 | di | ai−1 | θi
1         | 0    | 0  | 0    | θ1 − π/2
2         | π/2  | 0  | 0    | θ2
3         | 0    | d1 | 0    | θ3
4         | π/2  | d2 | 0    | 0

The general form of a link transformation that relates frame {i} to frame {i − 1} can be expressed as [38]:

(1) $${}^{i-1}_{i}T = \begin{bmatrix} {}^{i-1}_{i}R_{3\times 3} & {}^{i-1}_{i}P_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix}$$

where ${}^{i-1}_{i}R$ is the rotation matrix that describes frame {i} relative to frame {i − 1} and can be expressed as:

(2) $${}^{i-1}_{i}R = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 \\ \sin\theta_i \cos\alpha_{i-1} & \cos\theta_i \cos\alpha_{i-1} & -\sin\alpha_{i-1} \\ \sin\theta_i \sin\alpha_{i-1} & \cos\theta_i \sin\alpha_{i-1} & \cos\alpha_{i-1} \end{bmatrix}$$

moreover, ${}^{i-1}_{i}P$ is the vector that locates the origin of frame {i} relative to frame {i − 1} and can be expressed as:

(3) $${}^{i-1}_{i}P = \begin{bmatrix} a_{i-1} & -\sin\alpha_{i-1}\, d_i & \cos\alpha_{i-1}\, d_i \end{bmatrix}^{T}$$

The homogeneous transformation matrix that relates frame {4} to frame {0} can be obtained by multiplying the individual link transformation matrices:

(4) $${}^{0}_{4}T = {}^{0}_{1}T\; {}^{1}_{2}T\; {}^{2}_{3}T\; {}^{3}_{4}T = \begin{bmatrix} {}^{0}_{4}R_{3\times 3} & {}^{0}_{4}P_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix}$$

The single transformation matrix found from equation (4) represents the position and orientation of the reference frame {4} attached to the wrist joint (Figure 4) with respect to the fixed reference frame {0}. Equation (4) is known as the forward kinematics equation. If the joint variables (θ1, θ2, and θ3) are known, then the position of the last frame {4} (and/or the end-effector position) with respect to the base frame {0} can be determined from the forward kinematics equation.
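For illustration, the forward kinematics of equation (4) with the Table 1 parameters can be evaluated numerically. The sketch below is ours (the paper's computations were done in MATLAB); it uses NumPy, and the link lengths d1 and d2 are placeholder values.

```python
# Illustrative forward kinematics of the 3-DoF upper-arm model (Table 1,
# modified DH convention). Not from the paper; link lengths are placeholders.
import numpy as np

def modified_dh_transform(alpha_prev, a_prev, d, theta):
    """Link transform {i-1} -> {i} for the modified DH convention (Craig)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha_prev), np.sin(alpha_prev)
    return np.array([
        [ct,      -st,      0.0,  a_prev ],
        [st * ca,  ct * ca, -sa,  -sa * d],
        [st * sa,  ct * sa,  ca,   ca * d],
        [0.0,      0.0,      0.0,  1.0   ],
    ])

def wrist_position(theta1, theta2, theta3, d1=0.28, d2=0.26):
    """Position of the wrist frame {4} w.r.t. the shoulder frame {0}, Eq. (4)."""
    params = [
        (0.0,       0.0, 0.0, theta1 - np.pi / 2),  # joint 1: shoulder abd/add
        (np.pi / 2, 0.0, 0.0, theta2),              # joint 2: shoulder flex/ext
        (0.0,       0.0, d1,  theta3),              # joint 3: elbow flex/ext
        (np.pi / 2, 0.0, d2,  0.0),                 # frame 4: wrist (fixed)
    ]
    T = np.eye(4)
    for alpha_prev, a_prev, d, theta in params:
        T = T.dot(modified_dh_transform(alpha_prev, a_prev, d, theta))
    return T[:3, 3]

print(wrist_position(np.deg2rad(30), np.deg2rad(45), np.deg2rad(60)))
```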

For telerehabilitation purposes, the teleoperator's upper-limb joint variables (θ1, θ2, and θ3) were computed in real time from the Kinect coordinate data (see Figure 3b); more details are given in the next section. Note that NAO's upper arm range of motion (ROM) differs from that of the human upper arm. For instance, NAO's elbow joint ROM is around 60° smaller (see Table 2) than that of the human elbow joint (θ3). In addition, NAO's shoulder roll (abduction/adduction, θ1) ROM is smaller than that of the human shoulder joint, whereas its shoulder pitch (vertical flexion/extension, θ2) ROM is larger than the human shoulder pitch ROM. The human upper-arm joint motions are depicted in Figure 5.

Figure 2: Kinect sensor (reference coordinate frame attached)

Figure 3: (a) Names of the tracked joints using Kinect [37]; (b) Kinect coordinate frames used in telerehabilitation

Figure 4: Coordinate frame assignment for the human upper arm

Figure 5: General motion. (a) Shoulder flexion/extension (shoulder pitch); (b) shoulder abduction/adduction (shoulder roll); (c) elbow flexion/extension (elbow roll)

Table 2

Upper Arm Range of Motion of NAO Robot and Human

Joint Name     | Motion | Range for NAO (degrees) | Range for human (degrees) [39]
RShoulderPitch | θ2     | −119.5 to +119.5        | −150 to +30
RShoulderRoll  | θ1     | −76 to +18              | −50 to +180
RElbowRoll     | θ3     | +2 to +88.5             | 0 to +150
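The paper does not state explicitly how angles outside NAO's reach are handled; one simple option, sketched below as our own assumption, is to clamp each computed human joint angle to NAO's range of motion from Table 2 before it is sent as a motor command.

```python
# Hypothetical helper (our assumption, not described in the paper): limit human
# joint angles to NAO's right-arm range of motion (Table 2), in degrees.
NAO_ROM_DEG = {
    "RShoulderPitch": (-119.5, 119.5),  # theta2
    "RShoulderRoll":  (-76.0, 18.0),    # theta1
    "RElbowRoll":     (2.0, 88.5),      # theta3
}

def clamp_to_nao_rom(joint_name, angle_deg):
    """Return angle_deg limited to NAO's ROM for the given joint."""
    lo, hi = NAO_ROM_DEG[joint_name]
    return max(lo, min(hi, angle_deg))

# Example: a human shoulder roll of -90 deg is limited to NAO's -76 deg.
print(clamp_to_nao_rom("RShoulderRoll", -90.0))  # -> -76.0
```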

5 Control Scheme

To control NAO’s right arm movement (remotely), the Kinect sensor was used to generate the control signal. This sensor can track human motion in front of it. Note that, Kinect sensor provides Cartesian coordinate information of the human upper arm joints (shoulder, elbow, and wrist), whereas, human upper arm joint angles are required to teleoperate the NAO. Figure 6 shows the schematic of the control approach where it shows human arm joint angles were used as a reference input to the NAO’s controller. The geometric approach was used to compute human arm joint angles from Kinect data.

Figure 6: Schematic of the control approach

5.1 Geometric Calculation of Joint Angles

As stated earlier, the teleoperator's upper arm joint coordinates, i.e., the shoulder (xsk, ysk, zsk), elbow (xek, yek, zek), and wrist (xwk, ywk, zwk) coordinates in 3D space with respect to the Kinect reference frame (see Figure 2), can be obtained directly from the Kinect. However, the joint angles must be computed from these coordinate data. Note that the orientation of frame {0} (the teleoperator's shoulder frame, see Figure 4) is identical to the Kinect coordinate frame (see Figure 2). The coordinates of the elbow (frame {3}) and wrist (frame {4}) joints with respect to the shoulder frames {0}, {1}, or {2} can therefore be found by subtracting the shoulder joint coordinates from the elbow and wrist joint coordinates obtained from the Kinect sensor.

Therefore, for a known shoulder joint coordinate (xsk, ysk, zsk), the elbow and wrist joint coordinates with respect to the teleoperator's shoulder joint can be found as follows:

  • Elbow joint coordinates with respect to the shoulder frame:

$(x_e,\; y_e,\; z_e) = (x_{ek} - x_{sk},\; y_{ek} - y_{sk},\; z_{ek} - z_{sk}),$

and

  • Wrist joint coordinates with respect to the shoulder frame:

$(x_w,\; y_w,\; z_w) = (x_{wk} - x_{sk},\; y_{wk} - y_{sk},\; z_{wk} - z_{sk}).$

Let us consider a vector OA of unit length along the negative Z0 axis (see Figure 7), so that point A is located at (0, 0, −1); the angle between d1 and OA is θ2. Similarly, consider a vector OB of unit length along the positive X0 axis, so that point B is located at (1, 0, 0); the angle between OB and d1 is θ1. Then,

Figure 7: Joint angle θ1 and θ2 calculation

$$\cos\theta_1 = \frac{\left[(1-0)^2 + (0-0)^2 + (0-0)^2\right] + d_1^{\,2} - \left[(x_e - 1)^2 + y_e^{\,2} + z_e^{\,2}\right]}{2\, d_1}$$

and $\sin\theta_1 = \pm\sqrt{1 - \cos^2\theta_1}$; therefore,

$$\theta_1 = \tan^{-1}\!\left(\frac{\pm\sin\theta_1}{\cos\theta_1}\right).$$

Similarly,

$$\cos\theta_2 = \frac{\left[(0-0)^2 + (0-0)^2 + (-1-0)^2\right] + d_1^{\,2} - \left[x_e^{\,2} + y_e^{\,2} + (z_e + 1)^2\right]}{2\, d_1}$$

and $\sin\theta_2 = \pm\sqrt{1 - \cos^2\theta_2}$; therefore,

$$\theta_2 = \tan^{-1}\!\left(\frac{\pm\sin\theta_2}{\cos\theta_2}\right).$$

Finally, from the triangle shown in Figure 8, θ3 can be found using the cosine rule.

Figure 8: Joint angle θ3 calculation

$$d^{\,2} = d_1^{\,2} + d_2^{\,2} - 2\, d_1 d_2 \cos\theta_3,$$

where d is the distance between the shoulder and the wrist joints. Then,

$$\theta_3 = \cos^{-1}\!\left(\frac{d_1^{\,2} + d_2^{\,2} - d^{\,2}}{2\, d_1 d_2}\right).$$

To align NAO’s upper arm’s joints axes of rotation with that of the human upper arm, it is required to orient NAO’s joints axes of rotation, which is summarized below:

θ1,NAO (RShoulderRoll) = θ1 − 90°

θ2,NAO (RShoulderPitch) = θ2

θ3,NAO (RElbowRoll) = 180° − θ3

For convenience of representation, the three joint angles of NAO are denoted θ1, θ2, and θ3 throughout this paper.
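To make the geometric solution concrete, the sketch below re-implements it in Python (the paper performs this computation in MATLAB). The function and variable names are ours; the inputs are assumed to be (x, y, z) tuples of the shoulder, elbow, and wrist joints from the Kinect skeleton, and acos is used in place of the tan−1(±sin θ/cos θ) form, which corresponds to taking the positive-sine branch.

```python
# Illustrative Python re-implementation (ours) of the geometric joint-angle
# solution of Section 5.1. Inputs are Kinect joint coordinates in meters.
import math

def _clamp(value, lo=-1.0, hi=1.0):
    """Guard acos against small numerical overshoots in noisy Kinect data."""
    return max(lo, min(hi, value))

def nao_arm_angles(shoulder, elbow, wrist):
    """Return NAO's (RShoulderRoll, RShoulderPitch, RElbowRoll) in degrees."""
    # Elbow and wrist coordinates relative to the shoulder frame {0}.
    xe, ye, ze = (elbow[i] - shoulder[i] for i in range(3))
    xw, yw, zw = (wrist[i] - shoulder[i] for i in range(3))

    d1 = math.sqrt(xe ** 2 + ye ** 2 + ze ** 2)                       # upper arm
    d2 = math.sqrt((xw - xe) ** 2 + (yw - ye) ** 2 + (zw - ze) ** 2)  # forearm

    # theta1: angle between OB = (1, 0, 0) and the upper-arm vector (cosine rule).
    cos_t1 = (1.0 + d1 ** 2 - ((xe - 1.0) ** 2 + ye ** 2 + ze ** 2)) / (2.0 * d1)
    theta1 = math.degrees(math.acos(_clamp(cos_t1)))

    # theta2: angle between OA = (0, 0, -1) and the upper-arm vector.
    cos_t2 = (1.0 + d1 ** 2 - (xe ** 2 + ye ** 2 + (ze + 1.0) ** 2)) / (2.0 * d1)
    theta2 = math.degrees(math.acos(_clamp(cos_t2)))

    # theta3: elbow angle from the shoulder-elbow-wrist triangle.
    d = math.sqrt(xw ** 2 + yw ** 2 + zw ** 2)  # shoulder-to-wrist distance
    cos_t3 = (d1 ** 2 + d2 ** 2 - d ** 2) / (2.0 * d1 * d2)
    theta3 = math.degrees(math.acos(_clamp(cos_t3)))

    # Map to NAO's joint conventions (Section 5.1).
    return theta1 - 90.0, theta2, 180.0 - theta3
```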

5.2 Control Architecture

Computation of the joint angles from the Kinect data was performed on the host PC in the MATLAB environment. The angle values are then sent to NAO's operating system as input to NAO's controller, where a built-in position controller is used to drive NAO to follow the desired joint angles (in this case, the upper arm joint angles). To transfer the MATLAB output (i.e., the joint angles) to NAO, the Python programming language was used along with the Python SDK. A TCP/IP socket was used to establish communication between MATLAB and Python: a TCP/IP object was created in both MATLAB and Python with the same IP address and port number. MATLAB sends the angle values to Python through this TCP/IP object, and after receiving the angle values, Python sends the motor commands to the NAO robot by calling the ALMotion module in NAOqi [40]. NAOqi then sends the control signals to NAO's joint motors, using linear interpolation to generate the motion trajectory between two successive motor commands.
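The sketch below shows the Python side of this bridge under our own assumptions about the message format (comma-separated angles in degrees, one triple per message); the host, port, and robot address are placeholders, and a real implementation would add message framing and error handling.

```python
# Minimal sketch (ours) of the MATLAB-to-Python bridge: receive joint angles
# over TCP/IP and forward them to NAO via ALMotion (Python 2.7 for NAOqi).
import math
import socket
from naoqi import ALProxy

HOST, PORT = "127.0.0.1", 5005           # must match MATLAB's tcpip object (placeholders)
NAO_IP, NAO_PORT = "192.168.1.10", 9559  # hypothetical robot address

motion = ALProxy("ALMotion", NAO_IP, NAO_PORT)
names = ["RShoulderRoll", "RShoulderPitch", "RElbowRoll"]

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
conn, _ = server.accept()

while True:
    data = conn.recv(1024)  # assumed format: "theta1,theta2,theta3\n" in degrees
    if not data:
        break
    angles_deg = [float(v) for v in data.strip().split(",")]
    angles_rad = [math.radians(a) for a in angles_deg]
    motion.setAngles(names, angles_rad, 0.2)  # 0.2 = fraction of max speed

conn.close()
server.close()
```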

6 Performance Analysis

To analyze the performance of the developed teleoperation scheme, some basic experiments were conducted on the NAO robot, in which a healthy adult subject (age: 25; height: 5.4 ft; mass: 65 kg) performed the following single-joint movement exercises in front of the Kinect to teleoperate the NAO robot in real time:

  1. Shoulder abduction/adduction (NAO robot RShoulderRoll, θ1),

  2. Shoulder vertical flexion/extension (NAO robot RShoulderPitch, θ2),

  3. Elbow flexion/extension (NAO robot RElbowRoll, θ3).

The experimental results are shown below.

In our first teleoperation experiment, NAO's shoulder abduction/adduction motion was controlled. Figure 9 shows the position of NAO's upper arm and the teleoperator's upper arm at different instances while performing shoulder abduction/adduction. As shown in Figure 10, the motion began at −10° and ended at −74° (i.e., within the range of NAO's joint ROM).

Figure 9: Arm position of the human operator and NAO robot during shoulder joint abduction/adduction teleoperation

Figure 10: Shoulder joint abduction/adduction teleoperation. (a) Joint coordinates from Kinect; (b) comparison between the joint angles found from Kinect and NAO; (c) NAO's joint angles from Kinect coordinate data

Figures 10(a) and 10(c) show the Kinect coordinate data of the teleoperator's upper arm joints (shoulder, elbow, and wrist). While performing shoulder abduction/adduction, the Z coordinate of all joints in the Kinect frame should remain constant, which matches the tracked results (Figure 10(a)). Figures 10(b) and 10(c) display the trajectories followed by the human operator and the NAO robot. Motor position feedback from the NAO was obtained through the Device Communication Manager (DCM). Two separate data streams come from NAO's DCM: one is the motor controller output that goes to the motor for execution, called the motor command; the other comes from the joint position sensors (magnetic rotary encoders, MRE) and is called the sensor reading. From Figure 10(b), it is evident that there is some difference between these two signals. The reason is that when NAO moves any of its (upper arm) joints, the motion is interpolated between the current motor position and the target value, and it takes some time for the command to reach the motor boards and for the reading to be sent back. There is also some delay between the Kinect data and the motor command because the two programs run on different platforms: Kinect data are extracted in MATLAB, Python communicates with the NAO robot over Wi-Fi, and MATLAB and Python communicate over TCP/IP, which introduces a delay between sending and receiving data. In addition, NAO's controller needs some time to execute its current command before taking the next one. To eliminate or reduce such delay, all platforms would need to be synchronized, which is difficult when multiple platforms (in this case MATLAB, Python, and NAOqi) are involved. Nevertheless, the results show that the operator's upper limb trajectory pattern was followed closely by the NAO robot.

Figure 11 shows the upper arm positions of the NAO and the human operator at different instances of shoulder joint vertical flexion/extension. The experimental results are shown in Figure 12, where it can be seen that the motion starts at 90° and ends at 15°. Note that the actual range of this motion is larger, but we limited it to the range 90° to 15° for experimental purposes.

Figure 11: Arm position of the human operator and NAO robot during shoulder joint vertical flexion/extension teleoperation

Figure 12: Shoulder joint flexion/extension teleoperation. (a) Joint coordinates from Kinect; (b) comparison between joint angles from Kinect and the NAO robot; (c) coordinate data and joint angles from Kinect and the NAO robot

Figure 13 shows the hand positions of the NAO robot and the human operator at different instances of elbow joint flexion/extension. It can be seen from Figure 14 that the elbow motion started at 15° and ended at 85°. Figure 14(a) shows the coordinate data obtained from the Kinect sensor for the shoulder, elbow, and wrist joints. Note that while performing elbow flexion/extension, the coordinates of the elbow and shoulder joints (measured in the Kinect frame) should remain unchanged, and only the wrist joint coordinates should change with the motion. In addition, the X coordinate of the wrist joint should remain constant because the elbow motion is in the YZ plane. Accordingly, in Figure 14(a), the proximal red dots (indicating the elbow joint) and the black dots (indicating the shoulder joint) are clustered at the same position throughout the trial.

Figure 13: Arm position of the human operator and NAO robot during elbow joint flexion/extension teleoperation

Figure 14: Elbow flexion/extension teleoperation. (a) Skeleton with joint coordinates from Kinect; (b) comparison between joint angles from Kinect and the NAO robot; (c) coordinate data and joint angles from Kinect and the NAO robot

Finally, we conducted a few experiments on multi-joint movements. The first was shoulder joint horizontal flexion/extension, which is a combination of shoulder vertical flexion/extension and abduction/adduction; the experimental results are shown in Figure 15 and Figure 16. The second was diagonal reaching, which combines three joint motions: shoulder vertical flexion/extension, shoulder abduction/adduction, and elbow flexion/extension. The results of this experiment are shown in Figure 17 and Figure 18.

Figure 15: Arm position of the human operator and NAO robot during shoulder joint horizontal flexion/extension teleoperation

Figure 16: Shoulder joint horizontal flexion/extension teleoperation: comparison between joint angles from Kinect and the NAO robot

Figure 17: Arm position of the human operator and NAO robot for diagonal reaching motion during teleoperation

Figure 18: Diagonal reaching teleoperation: comparison between joint angles from Kinect and the NAO robot

From these experimental results, it is clear that there is a lag between the human operator's upper arm joint angles and those of the NAO robot. However, the trajectory pattern followed by the NAO robot was the same as that of the human operator. The objective of the developed control approach is to teleoperate the NAO robot along the trajectory produced by a human operator, and the experimental results show that the developed control scheme can be deployed successfully for this purpose. Therefore, we may conclude that, using the developed control approach, one can control the NAO remotely as long as the robot and the operator are connected to the same network. Thus, a therapist can introduce and demonstrate new exercises remotely using the NAO robot.

7 Conclusion

We developed a telerehabilitation scheme in which a therapist can introduce new exercises to a patient (or a group of patients) remotely. This can serve many patients undergoing rehabilitation in different places, as long as the NAO and the Kinect sensor are connected to the same network, and the motion tracking is easy and simple. In this research, a unique inverse kinematic solution was obtained that avoids the singularity problem. Studies show that patients often do not feel comfortable showing their disability to other people or being in front of others; telerehabilitation is a potential solution in such cases. Note that this telerehabilitation scheme is part of a higher-level remote rehabilitation scheme in which we will utilize all five DoFs of the NAO arm and introduce a remote communication strategy that does not require the NAO and the Kinect to be connected to the same network.

References

[1] Benjamin EJ, Blaha MJ, Chiuve SE, et al., on behalf of the American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Heart disease and stroke statistics—2017 update: a report from the American Heart Association. Circulation. 2017;135:e229-e445. doi:10.1161/CIR.0000000000000491

[2] Intercollegiate Stroke Working Party. National clinical guideline for stroke. Technical report. London: Royal College of Physicians; 2012.

[3] Annick A. A. Timmermans, Henk A. M. Seelen, Richard D. Willmann, Wilbert Bakx, Boris de Ruyter, Gerd Lanfermann & Herman Kingma (2009) Arm and hand skills: Training preferences after stroke, Disability and Rehabilitation, 31:16, 1344-1352. doi:10.1080/09638280902823664

[4] Winstein, C.J., Stein, J., Arena, R., Bates, B., Cherney, L.R., Cramer, S.C., Deruyter, F., Eng, J.J., Fisher, B., Harvey, R.L. and Lang, C.E., 2016. Guidelines for adult stroke rehabilitation and recovery: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke, 47(6), pp. e98-e169. doi:10.1161/STR.0000000000000098

[5] Outpatient rehabilitation among stroke survivors—21 States and the District of Columbia, 2005. MMWR: Morbidity & Mortality Weekly Report, 2007. 56(20): p. 504-507.

[6] Brainin, M., Y. Teuschl, and L. Kalra, Acute treatment and long-term management of stroke in developing countries. The Lancet Neurology, 2007. 6(6): p. 553-561. doi:10.1016/S1474-4422(07)70005-4

[7] M. Guidali, A. Duschau-Wicke, S. Broggi, V. Klamroth-Marganska, T. Nef, R. Riener, A robotic system to train activities of daily living in a virtual environment, Medical & Biological Engineering & Computing 49 (10) (2011) 1213-1223. doi:10.1007/s11517-011-0809-0

[8] Krebs, H.I., et al., Robot-Aided Neurorehabilitation. IEEE Transactions on Rehabilitation Engineering, 1998. 6(1): p. 75-87. doi:10.1109/86.662623

[9] Lum, P.S., C.G. Burgar, and P.C. Shor, Evidence for improved muscle activation patterns after retraining of reaching movements with the MIME robotic system in subjects with post-stroke hemiparesis. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2004. 12(2): p. 186-194. doi:10.1109/TNSRE.2004.827225

[10] Rahman, M.H., Rahman, M.J., Cristobal, O.L., Saad, M., Kenné, J.P., Archambault, P.S. (2015). Development of a whole arm wearable robotic exoskeleton for rehabilitation and to assist upper limb movements. Robotica 33(1), 19-39. doi:10.1017/S0263574714000034

[11] Feil-Seifer, D. and M.J. Matarić, Socially Assistive Robotics. IEEE Robotics & Automation Magazine, 2011. 18(1): p. 24-31. doi:10.1109/ICORR.2005.1501143

[12] A. Thobbi and W. Sheng, "Imitation learning of arm gestures in presence of missing data for humanoid robots", IEEE-RAS Int. Conf. on Humanoid Robots, pp. 92-97, Nashville, TN, USA, 2010. doi:10.1109/ICHR.2010.5686324

[13] J. B. Cole, D. B. Grimes, and R. P. N. Rao, "Learning full-body motions from monocular vision: Dynamic imitation in a humanoid robot", IEEE/RSJ Int. Conf. Intelligent Robots and Systems, pp. 241-246, 2007. doi:10.1109/IROS.2007.4399578

[14] F. Wang, C. Tang, Y. Ou and Y. Xu, "A Real-Time Human Imitation System", World Congress on Intelligent Control and Automation, pp. 3692-3697, Beijing, China, 2012. doi:10.1109/WCICA.2012.6359088

[15] W. Suleiman, E. Yoshida, F. Kanehiro, J.-P. Laumond, and A. Monin, "On human motion imitation by humanoid robot", IEEE Int. Conf. Robotics and Automation, pp. 2697-2704, Pasadena, USA, 2008. doi:10.1109/ROBOT.2008.4543619

[16] J. Rosado, F. Silva, and V. Santos, "A Kinect-based motion capture system for robotic gesture imitation", ROBOT13: First Iberian Robotics Conference, M.A. Armada et al. (Eds.), Advances in Intelligent Systems and Computing, vol. 252, pp. 585-595, Springer. doi:10.1007/978-3-319-03413-3_43

[17] Ningjia, Y., Feng, D., Yudi, W., Chuang, L., Tan, J.T.C., Binbin, X., Jin, Z.: A study of the human-robot synchronous control system based on skeletal tracking technology. In: IEEE International Conference on Robotics and Biomimetics (ROBIO) 2013, pp. 2191-2196.

[18] Riley, M., Ude, A., Wade, K., Atkeson, C.G.: Enabling real-time full-body imitation: a natural way of transferring human movement to humanoids. In: IEEE International Conference on Robotics and Automation (ICRA'03) 2003, pp. 2368-2374. doi:10.1109/ROBOT.2003.1241947

[19] Afsar, Sevgi Ikbali, Ilkin Mirzayev, Oya Umit Yemisci, and Sacide Nur Cosar Saracgil. "Virtual Reality in Upper Extremity Rehabilitation of Stroke Patients: A Randomized Controlled Trial." Journal of Stroke and Cerebrovascular Diseases 27, no. 12 (2018): 3473-3478. doi:10.1016/j.jstrokecerebrovasdis.2018.08.007

[20] Pham, Tam N., Joshua N. Wong, Tonya Terken, Nicole S. Gibran, Gretchen J. Carrougher, and Aaron Bunnell. "Feasibility of a Kinect®-based rehabilitation strategy after burn injury." Burns 44, no. 8 (2018): 2080-2086. doi:10.1016/j.burns.2018.08.032

[21] Givon Schaham, Noa, Gabi Zeilig, Harold Weingarden, and Debbie Rand. "Game analysis and clinical use of the Xbox-Kinect for stroke rehabilitation." International Journal of Rehabilitation Research 41, no. 4 (2018): 323-330. doi:10.1097/MRR.0000000000000302

[22] Ma, Mengxuan, Rachel Proffitt, and Marjorie Skubic. "Validation of a Kinect V2 based rehabilitation game." PLoS ONE 13, no. 8 (2018): e0202338. doi:10.1371/journal.pone.0202338

[23] Lai, Chung-Liang, Chien-Ming Tseng, D. Erdenetsogt, Tzu-Kuan Liao, Ya-Ling Huang, and Yung-Fu Chen. "A Kinect-Based System for Balance Rehabilitation of Stroke Patients." IEICE Transactions on Information and Systems 99, no. 4 (2016): 1032-1037. doi:10.1587/transinf.2015CYP0016

[24] Semblantes, Piedad A., Víctor H. Andaluz, Johana Lagla, Fernando A. Chicaiza, and Andrés Acurio. "Visual feedback framework for rehabilitation of stroke patients." Informatics in Medicine Unlocked 13 (2018): 41-50. doi:10.1016/j.imu.2018.10.002

[25] Eltoukhy, Moataz, Jeonghoon Oh, Christopher Kuenze, and Joseph Signorile. "Improved kinect-based spatiotemporal and kinematic treadmill gait assessment." Gait & Posture 51 (2017): 77-83. doi:10.1016/j.gaitpost.2016.10.001

[26] Müller, Björn, Winfried Ilg, Martin A. Giese, and Nicolas Ludolph. "Validation of enhanced kinect sensor based motion capturing for gait assessment." PLoS ONE 12, no. 4 (2017): e0175813. doi:10.1371/journal.pone.0175813

[27] Bakhti, K. K. A., I. Laffont, M. Muthalib, J. Froger, and D. Mottet. "Kinect-based assessment of proximal arm non-use after a stroke." Journal of NeuroEngineering and Rehabilitation 15, no. 1 (2018): 104. doi:10.1186/s12984-018-0451-2

[28] Fong, T., I. Nourbakhsh, and K. Dautenhahn, A survey of socially interactive robots. Robotics and Autonomous Systems, 2003. 42(3): p. 143-166. doi:10.1016/S0921-8890(02)00372-X

[29] Kiesler, S. Fostering common ground in human-robot interaction. In: ROMAN 2005, IEEE International Workshop on Robot and Human Interactive Communication, 2005. doi:10.1109/ROMAN.2005.1513866

[30] Yavşan, Emrehan, and Ayşegül Uçar. "Gesture imitation and recognition using Kinect sensor and extreme learning machines." Measurement 94 (2016): 852-861. doi:10.1016/j.measurement.2016.09.026

[31] Michieletto, Stefano, Elisa Tosello, Enrico Pagello, and Emanuele Menegatti. "Teaching humanoid robotics by means of human teleoperation through RGB-D sensors." Robotics and Autonomous Systems 75 (2016): 671-678. doi:10.1016/j.robot.2015.09.023

[32] Dajles, D., and F. Siles. "Teleoperation of a Humanoid Robot Using an Optical Motion Capture System." In 2018 IEEE International Work Conference on Bioinspired Intelligence (IWOBI), pp. 1-8. IEEE, 2018.

[33] Martí Carrillo, Felip, Joanna Butchart, Sarah Knight, Adam Scheinberg, Lisa Wise, Leon Sterling, and Chris McCarthy. "Adapting a general-purpose social robot for paediatric rehabilitation through in situ design." ACM Transactions on Human-Robot Interaction (THRI) 7, no. 1 (2018): 12. doi:10.1145/3203304

[34] Chen, Jianxin, Guanwen Wang, Xiao Hu, and Jiayun Shen. "Lower-body control of humanoid robot NAO via Kinect." Multimedia Tools and Applications 77, no. 9 (2018): 10883-10898. doi:10.1007/s11042-017-5332-3

[35] D. Gouaillier et al., "Mechatronic design of NAO humanoid," 2009 IEEE International Conference on Robotics and Automation, Kobe, 2009, pp. 769-774. doi:10.1109/ROBOT.2009.5152516

[36] NAO Anatomy: Sensing and Movement on Your Robot. https://www.robotlab.com/support/nao-anatomy-sensing-and-movement-on-your-robot. Accessed 18 Feb 2019.

[37] Yassine Bouteraa, Ismail Ben Abdallah (2017). "A gesture-based telemanipulation control for a robotic arm with biofeedback-based grasp." Industrial Robot: An International Journal, Vol. 44, Issue 5, pp. 575-587. https://doi.org/10.1108/IR-12-2016-0356

[38] Craig, J. J. (2005). Introduction to Robotics: Mechanics and Control. Upper Saddle River, N.J.: Pearson/Prentice Hall.

[39] Winter, D. A. (1990). Biomechanics and Motor Control of Human Movement, 2nd ed. New York: J. Wiley.

[40] Softbank Robotics Documentation: NAOqi. Online: http://doc.aldebaran.com/2-5/index_dev_guide.html. Accessed: 05/09/2019.

Received: 2019-05-09
Accepted: 2019-12-01
Published Online: 2020-07-28

© 2020 Md Assad-Uz-Zaman et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
