(Fragment of study characteristics: cross-sectional sample aged 5.7–18.4 years; longitudinal sample, n=223, aged 8.4–21.3 years; country: Japan.)
The methodological quality of the one randomized trial was assessed with the Consolidated Standards of Reporting Trials (CONSORT) checklist, which contains 25 items, divided into: title and abstract (one item with two sub-items); introduction (one item with two sub-items); methods (five items, plus a topic with five items on randomization); results (seven items); discussion (three items); and other information, such as registration, protocols and funding (three items). 9 , 10 Each item met was worth one point, and the points were added up for each paper analyzed. The methodological quality score of this randomized trial is shown in Table 1 .
In order to synthesize the main results and characteristics descriptively, the following information was extracted from each selected article: name of the main author, year of publication, country where the study was performed, design, sample size, type of technology evaluated, statistical variables, main results, and limitations.
Searches on PubMed and VHL using the descriptors “internet”, “child” and “growth and development” retrieved 550 articles. After applying the inclusion criteria, 221 studies were selected and, after reading the titles and abstracts, 125 were excluded. In total, 92 articles were read in full and, per the inclusion criteria and a detailed analysis, four studies were selected. Four other articles, which had to meet the same inclusion criteria defined in the methodology, were included after an additional search of the reference lists of the primarily selected articles. Thus, eight articles made up the sample. The flowchart is shown in Figure 1 .
Most studies were epidemiological. Almost all of them were observational (n=7), and only one was an intervention study. The observational studies included were longitudinal and/or cross-sectional (n=5), case-control (n=1) and cohort studies (n=1). Only one experimental study was included, a randomized controlled trial (n=1), as shown in Table 1 .
Their methodological quality was assessed based on their scores ( Table 1 ). Most studies were observational (n=7) and were therefore evaluated according to the STROBE criteria 7 . Scores ranged from 17 to 22, and most articles reached 20 points (n=4), indicating good methodological quality. The randomized trial scored 18 points according to the CONSORT 2010 criteria, which have a maximum score of 25, and its quality was also considered good. 9
The main results about the implications of technology in childhood are detailed in Tables 2 and 3 .
Authors (year) | Media type | Main results |
---|---|---|
Takeuchi et al. (2018) | Internet | Higher frequency of internet use was associated with decreased verbal intelligence and smaller increases in brain volume after a few years. The areas of the brain affected are related to language processing, attention, memory, and executive, emotional and reward functions. |
Slater et al. (2017) | Games (Internet) | Internet games that focus on appearance can be harmful to girls’ body self-image. |
Folkvord et al. (2017) | Games (advergames) | Advertising games (advergames) encourage the consumption of unhealthy foods. |
Slater et al. (2016) | Television | Children are able to absorb or internalize social messages about sexualization, illustrated in the study as the desire for sexualized clothing. Internalizations had a negative impact on their body self-image. |
Takeuchi et al. (2016) | Games (video games) | Playing video games for long periods can cause direct or indirect interruption in neural systems’ development, which can be related to an unfavorable neurocognitive development, especially verbal intelligence. |
Takeuchi et al. (2015) | Television | Watching television affects the regional volume of the brain associated with verbal language. TV watching time was negatively correlated with verbal intelligence quotient. It can indirectly affect sensorimotor areas. |
Authors (year) | Media type | Main results |
---|---|---|
McNeill et al. (2019) | Television, Games, Apps | Use of electronic applications for less than 30 minutes a day and limited media viewing could be positively associated with the cognitive and psychosocial development of preschool-age children. |
Yu and Park (2017) | Internet | Use of internet to socialize, exchange ideas and talk about concerns. An opportunity to socialize and make friends. |
After reading and analysis, the articles were classified and distributed into two categories according to their approach: negative aspects (n=6) and positive aspects (n=2). The review results are reported below.
Six of the studies linked technologies to negative aspects. The papers highlighted intellectual complications, 3 , 11 , 12 body image dissatisfaction 13 , 14 and encouragement of unhealthy food consumption. 15 Table 2 shows the main information.
Excessive internet use is cross-sectionally associated with lower cognitive functioning and reduced volume of several brain areas. In longitudinal analyses, a higher frequency of internet use was associated with a decrease in verbal intelligence and a smaller increase in the regional volume of gray/white matter in several brain areas after a few years. These areas relate to language processing, attention and executive functions, emotion, and reward. 3
In a study conducted with 80 British girls aged 8 and 9 years, appearance-focused games led participants to have a greater dissatisfaction with their appearance compared to control girls, who were not exposed to such games. Therefore, internet games that address appearance can be harmful to girls’ body self-image. 13
Appearance-focused games are not the only media with a negative impact on body image. TV shows, depending on their approach, can also negatively affect psychological development. In a study with Australian girls, TV shows aimed at the 6-9-year age group and focused on sexualization were absorbed or internalized as social messages by children. The authors stated that this exposure made the girls wish to wear sexualized clothes and develop a negative relationship with their body image. 14
Furthermore, a study with 562 Dutch and Spanish children reported that, among Dutch children, games with advertisements (advergames) for high-calorie foods stimulated the consumption of unhealthy foods, while those who played games with advertisements other than food-related ones were less inclined to this eating habit. 15 Thus, depending on what the child is exposed to, some influences may not be beneficial.
Video games were associated with increased mean diffusivity in cortical and subcortical areas. That is, prolonged video game use was associated with negative consequences, as it can directly or indirectly interrupt the development of neural systems and cause unfavorable neurocognitive development, especially when it comes to verbal intelligence. 11
Another study on children’s exposure to television identified a negative effect on the gray matter of the frontal area of the brain, with consequences for verbal language. Changes in sensorimotor areas were not directly related to TV watching time; the effect may be indirect, since watching this media is often associated with less physical activity, which, in turn, causes changes in the volume of gray matter in sensorimotor areas. 12
Only two studies addressed the positive aspects of technology use, related to cognitive and psychosocial development 16 and forms of interpersonal relationships. 17 The main information is shown in Table 3 .
The authors assessed the associations of electronic media use (particularly total screen time, TV viewing, and application use) with psychosocial development and executive function among 3- and 5-year-olds, and concluded that cognitive and psychosocial development in children 12 months later was positive when exposure to these media lasted less than 30 minutes a day. 16
In a study conducted with 2,840 students in South Korea, children with depressed mood were more likely to use the internet to socialize, exchange ideas and talk about their concerns as a way to meet their friendship needs. The Internet can be beneficial for children, who can take advantage of online opportunities for socialization and friendships based on common interests. 17
In general, the studies analyzed show that children currently spend a significant amount of time on the internet or other media, and they consider that this exposure can have both positive and negative impacts on children’s cognitive development and learning skills.
As for the negative impacts of this habit in childhood, a higher frequency of internet use is associated with a significant decrease in verbal intelligence, mainly related to language skills and concentration/attention abilities. One study reported frequent internet use by children to be related to decreased memory performance. 18
Another issue that must be taken into account is the number of games emerging all the time with new elements of fun and entertainment to attract children. An alert should be raised, however, about destructive websites such as the Blue Whale Challenge, which target vulnerable children and young people, threaten their physical integrity and are completely unethical, leading to the gradual destruction of society. 19
On the other hand, among the purposes most frequently declared by parents for allowing children to access technology, researchers have identified the promotion of problem-solving skills (56.7%), learning basic mathematics (53.8%), developing hand-eye coordination (46.2%), introduction to reading (51%), language (47.1%) and science (26%), as well as entertainment (56.7%). 20
Based on the studies selected, we point out a result that may be unexpected for parents: problematic use of electronic devices at an early age can lead children to show low levels of openness to experience, increased emotional instability, and impulsive or other attention-related behaviors. We must therefore reinforce that exposure to media should be carefully weighed by parents and guardians so as to avoid media dependence and misuse.
Problematic internet use (PIU) is associated with less openness and agreeableness, as children with higher levels of PIU end up with a deficit in social skills and difficulties in establishing interpersonal relationships, which can make them less open and outwardly less friendly. These children were also found to experience negative emotions and to use the internet as a means of feeling better about their everyday problems or unpleasant feelings. Relationships were also found between problematic video game use and behavior problems, specifically related to thoughts, attention, and aggressive behavior. 21
When seeking to curb the negative effects of inappropriate internet use, one cannot ignore the positive side of these technologies: technology is extensively available, and it is almost impossible to remove it from children’s daily lives. 22 The negative effects mentioned in this discussion deserve the same attention, and the authors place parental control and moderation as key factors. 23 In this sense, there is a directly proportional link between parental participation and attention and a less harmful relationship between children and technologies, especially regarding social factors. 24
Currently, children spend their lives immersed in the world of digital media, and research has consistently shown the growing, early and diversified use of this media. Children exposed to electronics tend to develop a desire for continued use, creating a potentially harmful cycle. Even more worrisome are the effects of digital media on young children by disrupting parent-child interaction, which is critical to a healthy emotional and cognitive development. 25
There are potential benefits of digital technology as a tool to enhance early childhood development, creativity and social connection, but it is imperative that parents monitor what their children are consuming and help them learn from it. 26
A review of the literature about media reported an adverse association between screen-based media consumption and sleep health, mainly due to delays in bedtime and reduced total sleep duration; the review also discussed the underlying mechanisms of these associations.
There is, therefore, an evident need to identify the warning signs of excessive technology use in this age group and to define an appropriate limit on daily screen time. Children can make balanced use of technologies, taking advantage of them without exaggeration and favoring communication and the search for information relevant to learning.
It is important to emphasize that prejudgments about technology-dependent children should be avoided. Knowing how these children feel about themselves and which factors bother them, and listening to them sensitively, helps form a vision of the ideal approach to technology dependence and suggest strategies to effectively face these difficulties. 28
Although this review has important and interesting results, some limitations must be listed. First, the number of studies identified with the criteria of our work was limited. Also, most of the studies were observational; therefore, experimental research must be carried out to understand the cause-consequence dynamics between media and their implications for child development. Further studies with larger samples and specific age groups, which would be relevant to increase statistical power, are needed.
The analysis of the articles showed positive and negative factors associated with the use of technologies by children. The main losses caused by technology use in childhood are excessive time connected to the internet, worsening of mental health, and changes in the circadian rhythm. The articles mentioned as negative factors the development of intellectual impairments, including verbal intelligence and attention, emotional instability, internet addiction, binge eating and physiological changes.
The main benefits of the use of technologies by children found were the strengthening of friendships and the possibility of greater social connection. For the preschool age group, there is evidence of improvement in cognitive and psychosocial development. Thus, in order to have technology as an ally for healthy child development, parents and guardians should limit the time of use and control the type of content seen and shared by children.
Currently, preventing internet use is an unrealistic measure, since parents and guardians also make great use of technologies. Moreover, with the new settings imposed by the COVID-19 pandemic, many services have moved towards digitization, including education and social interaction. Internet use is now a reality for all age groups, which makes this study relevant; measures aimed at optimizing its use and reducing risks must therefore be adopted. Once again, we emphasize the importance of parents and guardians as moderators and of updated training for health professionals to better guide them.
Further studies are suggested to keep the notion of the risk-benefit balance of internet use, and its long-term consequences for child development, up to date.
The study did not receive any funding.
Scientific Reports volume 14, Article number: 18318 (2024)
The use of observed wearable sensor data (e.g., photoplethysmograms [PPG]) to infer health measures (e.g., glucose level or blood pressure) is a very active area of research. Such technology can have a significant impact on health screening, chronic disease management and remote monitoring. A common approach is to collect sensor data and corresponding labels from a clinical grade device (e.g., blood pressure cuff) and train deep learning models to map one to the other. Although well intentioned, this approach often ignores a principled analysis of whether the input sensor data have enough information to predict the desired metric. We analyze the task of predicting blood pressure from PPG pulse wave analysis. Our review of the prior work reveals that many papers fall prey to data leakage and unrealistic constraints on the task and preprocessing steps. We propose a set of tools to help determine if the input signal in question (e.g., PPG) is indeed a good predictor of the desired label (e.g., blood pressure). Using our proposed tools, we found that blood pressure prediction using PPG has a high multi-valued mapping factor of 33.2% and low mutual information of 9.8%. In comparison, heart rate prediction using PPG, a well-established task, has a very low multi-valued mapping factor of 0.75% and high mutual information of 87.7%. We argue that these results provide a more realistic representation of the current progress toward the goal of wearable blood pressure measurement via PPG pulse wave analysis. For code, see our project page: https://github.com/lirus7/PPG-BP-Analysis
Introduction
The COVID-19 pandemic has highlighted the acute need for technology to support remote health care 1 , 2 . Consultancy McKinsey 3 reported a 40-fold increase in the use of telehealth services and a 40% increase in consumer interest in virtual health solutions when compared to pre-COVID-19 statistics. To provide an example, the ability to estimate vital signs from sensors available in smartphones and wearable devices could have a significant impact on the effective management of diseases (e.g., COVID-19, hypertension, diabetes). Frequent measurement of physiological parameters can help in managing medication dosages and understanding the effects of lifestyle changes on health.
The estimation of vital signs traditionally relies on customized sensors that measure physical or chemical properties of the body. For example, digital sphygmomanometers use sensors to measure the oscillations in the arteries to quantify blood pressure. Although accurate, such medical devices are far from ubiquitous, are often not easy to access, and are uncomfortable to use for extended periods of time. An alternative approach, promoted by the field of ubiquitous computing, is to leverage sensors already present in everyday devices for estimating health parameters. For example, heart rate can be measured using a smartphone camera by analyzing subtle changes in skin color as the heart pumps blood around the body 4 , 5 . This technology is now available on billions of devices through Google Fit ( https://www.google.com/fit/ ). Recent work has presented proof-of-concept measurement of oxygen saturation 6 , blood pressure 7 , and hemoglobin levels 8 via smartphones.
Existing research work can be broadly divided into two categories: (1) approaches that are developed from first principles to imitate an established medical method for measurement or diagnosis 9 , 10 , and (2) approaches where input (sensor) data and corresponding gold-standard data are collected using a medical grade device and machine learning models are trained to discover a relationship between the input and output 11 , 12 . In this paper, we focus on the latter category. Although well-intentioned, such data-driven approaches ignore a principled analysis of whether the input data have the necessary information to predict the desired health measure. As a result, numerous human and compute hours are wasted in developing and training deep learning models for prediction tasks that may be ill-posed or not feasible.
We consider the task of predicting blood pressure (BP) non-invasively. Blood pressure is the pressure applied to the arterial walls as blood circulates through the body. It depends on multiple factors, including blood volume, blood viscosity, and stiffness of blood vessels. Abnormally high or low blood pressure can result in heart attack, stroke, and diabetes 13 , 14 ; thus, it is recommended to measure BP frequently.
The methods to measure blood pressure non-invasively can be broadly categorized into two approaches: (i) The pulse transit time (PTT) method 15 , 16 , 17 is a popular, non-invasive technique for measuring blood pressure based on the time delay for a pressure wave to travel between proximal and distal arterial sites. The PTT approach has strong theoretical underpinnings based on the Bramwell-Hill equation 18 , which relates PTT to pulse wave velocity and arterial compliance. The Wesseling model captures the relationship between arterial compliance and blood pressure 19 . However, it is important to note that PTT can change independently of BP due to factors such as aging-induced arteriosclerosis and smooth muscle contraction. Hence, it needs to be calibrated from time to time. (ii) Pulse Wave Analysis (PWA) is a method used to estimate blood pressure (BP) by extracting features from an arterial waveform. This is typically performed using a photoplethysmography (PPG) waveform. PPG is an optical signal obtained by illuminating the skin (common sites are the finger, earlobe, or toe 20 ) with an LED and measuring the amount of transmitted, or reflected, light using a photodiode. PPG detects blood volume changes in the microvascular bed of tissue, as the blood volume directly impacts the amount of light transmitted/reflected. Unlike PTT, PWA has weaker theoretical underpinnings, as the small arteries interrogated by PPG are viscoelastic 15 . Calibration is invariably necessary for PWA methods to obtain reasonable results.
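For reference, the Bramwell-Hill relation underlying the PTT approach can be written as follows (our notation, not reproduced from the cited works):

\[
  \mathrm{PWV} \;=\; \sqrt{\frac{V}{\rho}\,\frac{dP}{dV}},
  \qquad
  \mathrm{PTT} \;=\; \frac{d}{\mathrm{PWV}},
\]

where \(V\) is the arterial blood volume, \(P\) the pressure, \(\rho\) the blood density, and \(d\) the distance between the proximal and distal measurement sites. As pressure rises, compliance \(dV/dP\) falls, pulse wave velocity increases, and PTT shortens, which is the basis for calibrated PTT-to-BP mappings.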
In this study, we concentrate on PWA measurement of BP. This method is beneficial because it only requires the use of a single sensor making it a more accessible solution. Predicting BP by analyzing PPG waveforms is an active area of research 7 , 21 , 22 , 23 , 24 , 25 and is already used in consumer products ( https://www.samsung.com/global/galaxy/what-is/blood-pressure/ ). However, we should note that “ while these methods (PTT and PWA) have been extensively studied and cuff-calibrated devices are now on the market, there is no compelling proof in the public domain indicating that they can accurately track intra-individual BP changes ” 20 , 26 . Therefore, although the features extracted from the PPG signal correlate with blood pressure, the signal’s adequacy for accurately predicting blood pressure remains unclear.
The discrepancy between recent research 27 , 28 , 29 claiming promising results on evaluation benchmarks for blood pressure, and other observational studies 20 , 26 which indicate a lack of a concrete theory to measure blood pressure using PPG signals via PWA, raises important questions. To help resolve this apparent contradiction, we conduct a comprehensive examination of the existing PWA techniques in the literature (Table 1 ). Our analysis reveals that a significant portion of the prior papers contains one or more of four common pitfalls: (a) Data Leakage: where data samples from the same patient are present in both the train and test sets, (b) Overconstraining: where data far from the normal range is discarded as outliers, which statistically simplifies the task, (c) Unreasonable Calibration: where the calibration method is not tested over longer (e.g., >1 day) time scales, and (d) Unrealistic Preprocessing: which filters out a significant portion of the dataset, terming it noisy. We analyze these pitfalls in detail in our results section.
Our analysis reveals a somewhat surprising lack of improvement (modulo the pitfalls above) in PPG-based blood pressure prediction. This is in contrast to the substantive improvements in non-invasive prediction of other vitals, such as heart rate, during this time. This raises the question as to whether there is a limit/ceiling on the prediction accuracy. In order to answer this, we propose tools to examine whether an input sensor signal ( x ) (e.g., PPG) can be a good predictor of the output health label ( y ) (e.g., BP). For this, we want to evaluate whether an underlying function f exists that captures the relationship between x and y , such that \(y=f(x)\) . We also want to measure the conditioning of this underlying function, i.e., whether small changes in x lead to small or large changes in y . It is important to ensure that (minor) noise in the sensor measurement (which is inevitable in a real-world setting) does not lead to significant error in the outputs. Our tool is based on the information-theoretic notions of mutual information and multi-valued mappings . Using our proposed tool, we find that BP prediction using PPG has a high multi-valued mapping factor of 33.2% and low mutual information of 9.8%. In comparison, heart rate prediction using PPG, a well-established task, has a very low multi-valued mapping factor of 0.75% and high mutual information of 87.7%. This confirms that estimating BP from PPG is a challenging and ill-conditioned problem, and a more principled approach is needed in the future for framing such health measure prediction tasks.
When designing end-to-end machine learning models, researchers often use techniques such as: ( A ) providing the model with observations from similar patients, ( B ) constraining the task (e.g., limiting the distribution of labels), ( C ) calibrating models using data from a participant, or ( D ) preprocessing to filter out problematic samples (e.g., noisy inputs). When doing so, it can often be difficult to identify how these steps impact the integrity of a model.
In this section, we present a systematic review of prior work predicting BP via PPG PWA (Figure 1 ), followed by a principled analysis using our proposed tools.
To motivate our work, we analyzed recent research 21 , 22 , 23 , 27 , 29 , 34 , 48 , 49 , 50 , 51 that reported results predicting BP via PPG PWA (see Table 1 ). These works relied on the MIMIC 52 dataset (Appendix C.1) containing continuous PPG signals and the corresponding arterial BP values. They evaluated their performance against the AAMI 53 and/or BHS 54 standards (Appendix C.2). We found that they fell prey to some common pitfalls, which resulted in misleading claims and over-optimistic results. For simplicity, we focus on the prediction of Systolic BP (SBP) rather than Diastolic BP (DBP), as SBP has a wider statistical range.
Before we begin, we should note that not all work (e.g., 35 , 36 , 37 , 38 , 50 ) followed the AAMI/BHS standards accurately. For example, some reported results on a test set of fewer than 85 subjects. Moreover, although these works use the same MIMIC dataset, we found a lack of standardization in the train-test data splits and different BP ranges used for evaluation (due to differences in how the data were filtered) across the literature 27 , 29 . In the absence of official source code, it was difficult to reproduce prior results and compare different methods. Hence, we trained our own reference deep learning model (Figure 2 ), similar to the methods presented in prior research 27 , 34 , 49 . The reference network takes a three-channel input consisting of the original PPG waveform, along with its first and second derivatives, and outputs the predicted SBP value. The model consists of an eight-layer residual CNN 55 with 1D convolutions, and is trained using a mean squared error loss. We also explored 2D-convolution-based CNN models, such as DenseNet-161 28 and ResNet-101 55 , taking the spectrogram of the 1D PPG signal 27 and/or the raw waveform as input. Among these, we found that the 1D CNN-based architecture performed best.
Our reference network is used to evaluate the impact on performance of the issues mentioned in Section " Review of the Results and Limitations of Prior Work ". The network has 28M trainable parameters, takes a three-channel input (PPG, VPG, APG), and outputs the SBP prediction. The model is optimized using a mean squared error loss.
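A minimal PyTorch sketch of such a reference network. The paper specifies an eight-layer residual CNN with 1D convolutions, a three-channel (PPG, VPG, APG) input, a scalar SBP output, and an MSE loss; the channel width, kernel sizes, and pooling below are our assumptions.

```python
# Sketch of a 1D residual CNN for SBP regression; widths/kernels are assumed.
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, channels, kernel_size=7):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))  # residual (skip) connection

class SBPNet(nn.Module):
    def __init__(self, in_channels=3, width=64, n_blocks=4):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, width, kernel_size=7, padding=3)
        self.blocks = nn.Sequential(*[ResBlock1D(width) for _ in range(n_blocks)])
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(width, 1)
        )

    def forward(self, x):               # x: (batch, 3, time)
        return self.head(self.blocks(self.stem(x))).squeeze(-1)

model = SBPNet()
x = torch.randn(8, 3, 1250)             # e.g., 10 s of PPG/VPG/APG at 125 Hz
loss = nn.functional.mse_loss(model(x), torch.randn(8))
```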
Every participant (P) has multiple data records (R), and each record is divided into multiple overlapping windows (W). Each window forms a data sample . In No-Overlap, the train and test data are split at the participant level, while in Domain-Overlap, the split happens at the record level, and in Data-Overlap, the split happens at the window level.
The goal of any machine learning model is to generalize well to test data that will be seen in real-world settings 56 . Even with a large training set, it is very unlikely that identical samples to those seen in the training set will appear at test time, thus generalization is crucial. Unfortunately, good performance on a training dataset does not always translate to good performance on a test set, as models can overfit . This is especially true for modern deep neural networks, which are highly over-parameterized and can easily memorize the training data 57 . Thus, evaluating test performance accurately is an important step in understanding how a model will function in the real world. For this, the test data needs to be pristine, i.e., without any contamination from the training data. Unfortunately, contamination can and does happen in several ways.
We observed two types of overlap between training and testing splits (Figure 1 A): data-overlap and domain-overlap .
Data-overlap corresponds to overlap of actual segments from a sample between the train-test sets. Domain-overlap is more subtle, where although there is no direct overlap of samples, leakage may occur due to similarities in train-test data. In our case, it corresponds to using different records from the same patient in both the test and train sets (Figure 3 ).
Here, we consider a particular example from the literature, PPG2ABP 21 , where the authors propose a U-Net based architecture to predict the ABP (arterial BP) waveform from PPG. They obtain impressive results, with a bias of \(-1.19\) mmHg and error standard deviation (SD) of 8.01 mmHg on the SBP prediction task (Table 2 ), which is close to the AAMI standard. (Note: there is an error in the computation of the standard deviation in the PPG2ABP 21 evaluation script; we report the corrected results here.) However, while analyzing their source code, we found both data and domain overlaps.
Data-Overlap : The PPG2ABP 21 data processing pipeline divides each PPG record ( \(\sim\) 6 mins long) into 10-second windows with an overlap of 5 seconds (URL: github.com/nibtehaz/PPG2ABP/blob/master/codes/data_processing.py) (Figure 3 ). Using overlapping windows helps, as it increases the size of the training data. However, the problem arises when these 10-second samples are randomly split into train and test sets. Since the overlapping windows are generated before the random train-test split, the train and test sets can have samples with the same overlapping regions (Figure 3 ). A deep learning model can memorize values based on these overlapping portions, leading to artificially high accuracy on the test set.
Domain-Overlap : Due to the physiological differences between individuals, person-dependent models often outperform person-independent models 58 . For example, for the BP prediction task, a model can learn the normal range of an individual’s BP and leverage that to provide more accurate predictions. Since the knowledge of an individual’s identity can impact a model’s accuracy, it is important that the identity of the subject is not leaked (even implicitly) between test and train sets, especially while building person-independent models. Since the PPG signature has been shown to identify an individual 59 , the presence of PPG signals from the same individual in both train and test data can thus leak identity. This turns out to be the case in the PPG2ABP work 21 , as they randomly split PPG records into test and train sets, resulting in different windows from the same patient present in both test and train sets (Figure 3 ).
To quantitatively evaluate the impact of data leakage, we compare the performance of the PPG2ABP network on three splits (Figure 3 ): (1) No-Overlap : the dataset is partitioned at the patient level with an 80-20% train-test split; (2) Domain-Overlap : each patient has multiple records ( \(\sim\) 6 mins long), and these records are randomly split 80-20% between the train and test sets, i.e., records from the same patient can be present in both; and (3) Data-Overlap : we use the split provided by PPG2ABP 21 , which divides the records into overlapping windows followed by an 80-20% train-test split. All splits consist of 10-second windows with an overlap of 5 seconds, to maintain consistency with the split proposed in PPG2ABP. Table 2 shows the performance of the PPG2ABP network over the three splits. Domain-Overlap significantly inflates the apparent accuracy of the PPG2ABP network, reducing the error standard deviation from 23.1 to 16.2 mmHg; Data-Overlap further reduces it to 8.01 mmHg. This analysis clearly shows that leakages, however subtle, can lead to seemingly high but artificial improvements. Note that for all analysis in the rest of this paper, we use the No-Overlap split.
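A sketch contrasting the three splits, under an assumed data layout (a dict mapping patient IDs to lists of 1D record arrays); the helper names, window length, and hop are ours, chosen to match the 10 s / 5 s description above.

```python
# Only the level at which the 80-20 split happens differs between strategies.
import numpy as np

WIN, HOP = 1250, 625          # 10 s windows with 5 s overlap at 125 Hz

def windows(record):
    return [record[i:i + WIN] for i in range(0, len(record) - WIN + 1, HOP)]

def split(items, rng, frac=0.8):
    items = list(items)
    rng.shuffle(items)
    k = int(frac * len(items))
    return items[:k], items[k:]

def make_splits(data, seed=0):
    rng = np.random.default_rng(seed)
    # (1) No-Overlap: split at the patient level.
    tr_p, te_p = split(data.keys(), rng)
    no_overlap = ([w for p in tr_p for r in data[p] for w in windows(r)],
                  [w for p in te_p for r in data[p] for w in windows(r)])
    # (2) Domain-Overlap: split at the record level; one patient's records
    # can land on both sides, implicitly leaking identity.
    recs = [r for p in data for r in data[p]]
    tr_r, te_r = split(recs, rng)
    domain_overlap = ([w for r in tr_r for w in windows(r)],
                      [w for r in te_r for w in windows(r)])
    # (3) Data-Overlap: window first, then split; overlapping halves of the
    # same 10 s of signal can end up in both train and test.
    data_overlap = split([w for r in recs for w in windows(r)], rng)
    return no_overlap, domain_overlap, data_overlap
```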
Health-related data typically have non-uniform Gaussian distributions, with the highest data density near the “normal” (or healthy) range, and falling exponentially as we move away from the normal. We observe a similar trend for BP data in both the Aurora-BP 60 (Appendix C.1) and MIMIC datasets (see Figure 4 ). While points far from normal are rare, they are often crucial events (abnormally low or high BP) indicating serious health issues requiring medical attention.
The distribution of systolic BP values in the: (left) Aurora-BP dataset and (right) MIMIC dataset. In the MIMIC dataset, the SBP values lie in the range 65–200 mmHg, however prior works ignore samples with SBP values outside the range of 75–165 mmHg.
However, we found that researchers often discard so-called “outliers” 22 , 27 , 29 (Figure 1 B), arguing that such samples are unlikely or have occurred due to noise in the data collection process. For example, the MIMIC dataset has SBP values ranging between 65 and 200 mmHg (75-220 mmHg in Aurora-BP), but Schlesinger et al. 27 ignored samples outside the range of 75–165 mmHg, referring to the discarded values as “improbable”. Similarly, Cao et al. 22 and Hill et al. 29 use a constrained range of 75–150 mmHg, while according to the British Hypertension Society literature, 140–159 mmHg is Grade-1 (mild) hypertension, 160–179 mmHg is Grade-2 (moderate) hypertension, and \(\ge\) 180 mmHg is Grade-3 (severe) hypertension 54 .
Constraining the data range has two problems. First, it leads to an incomplete evaluation, as the model is neither trained nor tested on samples from the discarded ranges. Second, since the statistical range of the output is reduced, this makes the prediction task artificially “easier” (i.e., a lower error can be achieved more easily), which may result in promising but misleading results. To quantitatively study the impact of constraining data ranges, we conducted an experiment using our reference network with different filtering of the data range. Table 3 shows the performance of our network when trained with three different SBP ranges: 65–200, 75–165 and 75–150 mmHg. Even small restrictions in the output range can lead to a significant (perceived) improvement in accuracy, e.g., reducing the SBP upper limit from 165 to 150 mmHg results in an \(\sim\) 11.4% improvement in the standard deviation. This can be explained as samples at the extremes often result in the highest prediction errors (as models tend to predict closer to the mean of the distribution making predictions on samples with very high or low ground-truth BP values the most inaccurate).
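To see why a narrower label range mechanically lowers the achievable error, consider a predict-the-mean baseline on synthetic SBP values (the distribution below is an assumption for illustration, not MIMIC data):

```python
# Restricting the label range shrinks the target spread, so even a trivial
# predict-the-mean baseline looks better.
import numpy as np

rng = np.random.default_rng(0)
sbp = np.clip(rng.normal(125, 22, 50_000), 65, 200)   # assumed toy distribution

for lo, hi in [(65, 200), (75, 165), (75, 150)]:
    kept = sbp[(sbp >= lo) & (sbp <= hi)]
    baseline_sd = kept.std()            # error SD of predicting the mean
    print(f"{lo}-{hi} mmHg: {100 * len(kept) / len(sbp):.1f}% kept, "
          f"mean-baseline error SD = {baseline_sd:.1f} mmHg")
```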
The exclusion of samples with SBP measurements above 165 mmHg or below 75 mmHg during the training of machine learning models may result in overlooking crucial physiological features, potentially concealing serious health conditions and introducing bias into the model. This practice not only limits the scope of the developed models but also hinders conclusions about their generalizability and real-world applicability, as they become less representative of the diverse patient populations they are intended to serve.
The relationships between health measures (e.g., PPG and BP) are often person dependent. For example, blood pressure ( bp ) is dependent on the patient’s heart rate ( hr ), blood viscosity ( visc ), stiffness of blood vessels ( stif ), etc., i.e., \(bp = f(hr, visc, stif, ...)\) . While the PPG signal might capture heart rate well, it may not be able to capture viscosity- and stiffness-related information. To solve this problem, it is common to propose the use of a calibration step, wherein a few PPG samples from each patient along with gold-standard BP values are used to calibrate the function f for that patient (Figure 1 C). The model then learns a calibrated function, \({\hat{f}}\) , for a specific patient, i.e., \(bp={\hat{f}}(hr)\) , where the patient-specific parameters ( visc , stif , ...) are folded into \({\hat{f}}\) .
The literature does not offer a universally effective calibration strategy. Cao et al.’s 22 method needs to be calibrated every time before a BP prediction to find the optimal fit of the watch on the wrist, while Schlesinger et al.’s 27 model needs to be calibrated once to find the offset value between the model and the true prediction. As blood pressure may not change drastically within minutes (at rest) and significant trends might be observed only over the course of a few months owing to lifestyle changes or the influence of medication 61 , it becomes important to pay attention to questions such as: What is the frequency of re-calibration? Is the calibration approach prone to changes in other environmental factors? We believe that the calibration approaches reported in prior work risk over-fitting by memorizing patient-level local temporal characteristics, and that evaluation is incomplete given that they do not evaluate BP prediction over longer time scales.
To understand the influence of calibration, we evaluate the prediction performance under different calibration strategies. Naïve Calibration simply predicts a constant calibrated value for the entire record. The constant value is computed as the mean of the ground truth values of the first three windows of a record. Offset Calibration uses our reference network, but adds an offset to the predicted value. The offset is computed in the calibration step as the difference between the predicted and ground truth BP of the test record’s first window. We found the Naïve Calibration to perform very well (Table 4 ), with a standard deviation of 8.61 mmHg, close to the AAMI standard. However, predicting a constant BP value for a patient is clearly incorrect. This inconsistency underscores problems with the evaluation methodology. Since typical records in MIMIC have short time intervals (average length = 6 minutes) compared to the time scales at which BP changes, predicting a constant value gives deceptively good accuracy. An appropriate evaluation of calibration methods should consider time scales spanning the intended re-calibration duration. For example, if re-calibration is planned every six months, the method should be evaluated with patients tracked over at least a six-month period. To demonstrate that calibration systems can quickly deteriorate over time, we analyzed the performance of Offset Calibration as the time from the calibration window increases. Although the method performs well for the first few days, the error rates increase dramatically after that (Figure 5 A).
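A minimal sketch of the two calibration baselines as described above; the array shapes and example numbers are ours.

```python
# `gt` holds the per-window ground-truth SBP for one record, `pred` the
# reference network's per-window predictions for the same record.
import numpy as np

def naive_calibration(gt):
    # Predict one constant per record: the mean SBP of the first 3 windows.
    return np.full_like(gt, np.mean(gt[:3]))

def offset_calibration(gt, pred):
    # Shift all predictions by the error observed on the first window
    # (equivalently, add gt[0] - pred[0] to every prediction).
    return pred + (gt[0] - pred[0])

gt = np.array([118.0, 121.0, 119.5, 124.0, 122.0])
pred = np.array([110.0, 112.5, 111.0, 118.0, 115.0])
print(naive_calibration(gt))        # constant ~119.5 mmHg for every window
print(offset_calibration(gt, pred))
```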
The MIMIC dataset comprises data from ICU patients, with artifacts due to patient movement, sensor degradation, transmission errors from bedside monitors, and human errors in post-processing data alignment. The impact of these artifacts is visible in both the PPG and ABP waveforms as missing data, noisy data, and sudden changes in amplitude and frequency (Figure 6 ). To clean the signal, researchers 27 , 29 have used band-pass filters to remove noise in the high frequency ( \(\ge\) 16 Hz) and low frequency ( \(\le\) 0.5 Hz) ranges, followed by auto-correlation to filter signals that are not strongly correlated with themselves. The auto-correlation step removes samples with uneven amplitude and/or frequency. After cleaning the MIMIC dataset (Figure 1 D), Schlesinger et al. 27 used less than 5% of the total data for training their neural network, while Hill et al. 29 and Slapnicar et al. 34 used less than 10% of the total MIMIC data. This suggests that “clean” data is rare. Although filtering datasets to remove some noise is often an essential step to train a machine learning model 56 , excessive filtering of data can result in overfitting. Models trained on such clean data might achieve high performance on a clean test set; however, they might fail in practice, as it is difficult to obtain such clean signals in a real-world scenario.
( A ) The offset calibration method’s performance falls off quickly after the first few days. ( B ) Performance of our reference network with different auto-correlation thresholds on the MIMIC dataset.
Examples of poor-quality photoplethysmography signals from the MIMIC dataset.
To understand the impact of filtering on a dataset, we measure the performance of our reference network at different auto-correlation thresholds. Figure 5 (B) plots the performance of our reference network in predicting SBP and the percentage of filtered data for each auto-correlation threshold. The performance of the network improves by 29.7% and the dataset size decreases by 63%, as we increase the auto-correlation threshold from 0 to 0.8.
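A sketch of an auto-correlation quality filter in the spirit of the preprocessing described above; the lag range and normalization are our assumptions, and the exact filters used in prior works may differ.

```python
# Windows whose peak normalized auto-correlation (searched over lags roughly
# spanning one cardiac cycle) falls below a threshold are discarded.
import numpy as np

def max_autocorr(win, min_lag=50, max_lag=150):   # lags ~0.4-1.2 s at 125 Hz
    w = (win - win.mean()) / (win.std() + 1e-8)
    n = len(w)
    scores = [np.dot(w[:-lag], w[lag:]) / (n - lag)
              for lag in range(min_lag, max_lag)]
    return max(scores)

def quality_filter(windows, threshold=0.8):
    # Raising `threshold` keeps cleaner data but shrinks the dataset.
    return [w for w in windows if max_autocorr(w) >= threshold]
```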
We propose and utilize two tools—based on multi-valued mappings and on mutual information (Appendix B)—to estimate if the input signal is a good predictor of the output. Using our proposed tools we performed a principled analysis to study the relationship between PPG and BP. For comparison, we also used our tools on heart rate (HR) and reflected wave arrival time (RWAT) estimation for which it is known that the PPG signal is a strong predictor.
Checking for Multi-Valued Mappings : We use Algorithm 1 to find multi-valued mappings corresponding to data samples that are close in the input space but distant in the output space. As discussed in Section B.1, to compute the distance between two PPG inputs, we first align them using cross-correlation, followed by computing their Euclidean distance. We divide the dataset records into non-overlapping two-second windows and treat them as individual inputs. We set an input distance threshold of 1.0, which corresponds to a per-time sample threshold of \(4e-3\) (each 2s PPG window had 250 samples). For the output, we set thresholds of 8 mmHg, 8 bpm, and 0.02s for the BP, HR and RWAT prediction tasks, respectively. We found very few multi-valued mappings for the HR and RWAT tasks, but a large number of mappings for the SBP task (Table 5 ). In the MIMIC dataset, for 33.2% of the 2-second windows, we found another window from the same patient that was close in the input PPG space but had a significantly different SBP output. When limiting the search to different patients, for 15.0% of the windows we could still find such matches. This implies that the task of predicting BP from PPG is ill-conditioned. Figure 7 shows examples of such multi-valued mappings, with highly similar input PPG waveforms but significantly different output arterial BP waveforms. In comparison, for the HR and RWAT tasks, the number of such matches is much smaller at 0.02% and 0.08% intra-patient, respectively, suggesting much better conditioning.
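A simplified sketch of the multi-valued-mapping check; Algorithm 1 is not reproduced verbatim, and the circular-shift alignment below is an approximation of the cross-correlation alignment described above.

```python
# Count windows for which some other window is near-identical in PPG space
# yet has a very different SBP (an O(n^2) brute-force search).
import numpy as np

IN_THRESH, OUT_THRESH = 1.0, 8.0        # PPG distance; SBP difference (mmHg)

def aligned_distance(a, b):
    # Align b to a at the cross-correlation peak (circular shift as an
    # approximation), then take the Euclidean distance.
    lag = np.argmax(np.correlate(a, b, mode="full")) - (len(b) - 1)
    return np.linalg.norm(a - np.roll(b, lag))

def multi_valued_fraction(ppg_windows, sbp_values):
    n, hits = len(ppg_windows), 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if (aligned_distance(ppg_windows[i], ppg_windows[j]) < IN_THRESH
                    and abs(sbp_values[i] - sbp_values[j]) > OUT_THRESH):
                hits += 1
                break                    # one counterexample suffices for i
    return hits / n                      # the "multi-valued mapping factor"
```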
In the process of filtering multi-valued mappings, it is essential to consider the specificity of sensors and the methodologies employed in preprocessing the input data. Our analysis focuses on intra-patient and inter-patient multi-valued mappings within specific datasets, namely MIMIC and AURORA, rather than across different datasets. This approach ensures that our findings are not confounded by variations in sensor quality or the nuances of measurement techniques. Additionally, it enables us to apply preprocessing steps that preserve amplitude information.
Multi-valued mappings. Examples of PPG waveforms (PPG \(_{i}\) and PPG \(_{j}\) ) that are very similar and have corresponding arterial blood pressure waveforms (ABP \(_{i}\) and ABP \(_{j}\) ) that are quite different. This highlights the existence of similar features that map to different targets, which makes the task of blood pressure prediction via PPG pulse wave analysis ill-conditioned.
Evaluating Mutual Information : To estimate mutual information (MI) between the PPG signal and the target output (BP/HR/RWAT), we use the K-nearest neighbours based approach proposed by Kraskov et al. 62 . We leverage dimensionality reduction to make MI estimation tractable, using handcrafted and auto-encoder learned feature representations. We report the mutual information of the input features and target variable, as well as the entropy of the target variable. Note that the target variable’s entropy is the maximum achievable mutual information. Thus, the ratio of MI and target variable entropy represents the target information fraction encoded by the input, which we call Info-Fraction . We found Info-Fraction to be a more intuitive measure than the absolute MI values, and use it to compare the predictive power of PPG across the different tasks.
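A rough sketch of the Info-Fraction computation. The paper uses the Kraskov k-NN estimator; the histogram-based version below is a simpler stand-in that keeps MI and entropy on the same discrete (nats) scale so that their ratio is well defined. The bin count is an assumption.

```python
# Info-Fraction = I(feature; target) / H(target), both estimated on binned data.
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

def info_fraction(feature, target, bins=30):
    f = np.digitize(feature, np.histogram_bin_edges(feature, bins))
    t = np.digitize(target, np.histogram_bin_edges(target, bins))
    mi = mutual_info_score(f, t)                  # I(feature; target), nats
    h = entropy(np.bincount(t) / len(t))          # H(target), nats
    return mi / h
```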
Handcrafted Features : As suggested by Takazawa 63 and Elgendi et al. 64 , we calculate handcrafted features (see Table 6 ) from the PPG waveform (Figure 8 ). Due to the absence of a time-aligned ECG waveform in the MIMIC dataset, we extracted the relevant handcrafted features only from the PPG waveform. Table 7 presents the MI of these individual features with respect to the BP prediction task for both the MIMIC and Aurora-BP datasets, along with the MI when all these features are combined and regarded as a single multi-dimensional input. We found that even the combined feature set encodes only a small fraction of the total target entropy. For example, in the MIMIC dataset, the combined features’ Info-Fraction is just 9.5%, while heart rate by itself contributes an Info-Fraction of 4.1%. Similar observations hold true for the Aurora-BP dataset. This hints that the PPG signal does not have enough information to predict BP in this dataset, and moreover that the prediction is highly dependent on the heart rate.
For the Aurora-BP dataset we have the demographic data (age, weight, height) of the subjects, as well as time-aligned PPG and ECG waveforms. This allows us to calculate additional features, e.g., radial Pulse Arrival Time (rPAT) and other derived features 60 . Prior work 7 has used PAT to estimate blood pressure. Moreover, the Aurora-BP dataset has multiple readings for each subject in different positions (e.g., sitting, at rest, and supine), which allows us to add delta features reflecting the difference between features in the two conditions. Despite this, we found the entropy results for the Aurora-BP dataset to be similar to the MIMIC dataset, with the handcrafted features able to capture only 9.8% of the entropy of blood pressure (Table 8 ). On the other hand, for the HR and RWAT prediction tasks, the handcrafted features captured 87.7% and 64.6% of the entropy, respectively (ground truth for HR is derived from the ECG sensor data and RWAT from the tonometric sensor data). This further strengthens our finding that the PPG signal, even with additional information from the ECG waveform, has limited information to predict BP.
A visual description of the hand-crafted features calculated from the PPG and ECG waveforms. The systolic ramp ( \(\frac{dp}{dt}\) ) is defined as \(\frac{y_{2}-y_{1}}{t_{2}-t_{1}}\) .
Auto-encoder Features : As an alternative to handcrafted features, we train an auto-encoder on the raw PPG waveform to obtain a set of low dimensional features. We use a five-layer perceptron (MLP) auto-encoder with ReLU activation and a bottleneck layer of 20 neurons. The model was trained with the Adam optimizer (learning rate of 0.001) and a mean-squared error loss (with a stopping point when the loss saturated at <0.1). Training time on a single NVIDIA P100 was under an hour. Table 9 shows the MI of the combined bottleneck features with respect to the BP, HR and RWAT prediction tasks. Although the auto-encoder features are more comprehensive and have higher MI compared to the hand-crafted features, the Info-Fraction for BP prediction (12.9% for MIMIC and 8.7% for Aurora-BP) is still much lower compared to that for HR (92.2% for MIMIC and 93.1% for Aurora-BP) and RWAT (70.1% for Aurora-BP) prediction tasks.
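A sketch of such an auto-encoder along the lines described above (five linear layers, ReLU activations, a 20-neuron bottleneck, Adam at a learning rate of 0.001, MSE loss); the hidden widths and the 2 s / 250-sample window length are our assumptions.

```python
# MLP auto-encoder whose 20-d bottleneck provides features for the MI analysis.
import torch
import torch.nn as nn

class PPGAutoEncoder(nn.Module):
    def __init__(self, n_samples=250, bottleneck=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_samples, 128), nn.ReLU(),
            nn.Linear(128, bottleneck), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, n_samples),
        )

    def forward(self, x):
        z = self.encoder(x)              # bottleneck features
        return self.decoder(z)

model = PPGAutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 250)                 # batch of 2 s PPG windows at 125 Hz
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), x)   # reconstruction objective
loss.backward()
opt.step()
```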
There are two possible implications of these findings. First, they may suggest that the PPG signal lacks adequate information for accurate BP prediction. Alternatively, they could imply a limitation in current sensor technology, which makes sensors susceptible to confounding factors such as external noise and environmental variations, thereby hindering the accuracy of BP prediction.
Our results reveal that BP prediction via pulse wave analysis of the PPG signal is still an unsolved task and far from the acceptable AAMI and BHS standards. By performing a systematic review and accompanying experiments we found several issues being overlooked in the prior work that have led to seemingly over-optimistic results. These pitfalls can be categorized into data splits that leak information from test samples into the training set, heavy constraints on the task that remove challenging samples and reduce the range of target values substantially, calibration methods that seem to be practically problematic, and unreasonable preprocessing that filters the data to an unrealistic extent such that any noise is unacceptable. These pitfalls simplify the machine learning task, creating a deceptive perception of ease in model training, which results in inflated performance. Ultimately, this translates to models that overfit the training data, hindering their ability to generalize effectively and handle real-world data variations.
While research on non-invasive approaches to estimate health vitals such as heart rate and blood oxygen saturation has made tremendous progress, enabling these technologies to become ubiquitous in the last decade, progress in non-invasive cuffless BP estimation has been slow, despite similar research interest. This has prompted us to question whether the problem itself is ill-conditioned and whether the PPG signal contains enough information to predict BP in the first place. In order to answer these questions, we have proposed a set of tools based on multi-valued mappings and mutual information to check whether an input signal is a good predictor of the desired output. The multi-valued mapping checker allows us to find samples close in input space but far in output space. We found many such samples in both the MIMIC and Aurora-BP datasets. Searching for multi-valued mappings was trivial once an appropriate distance metric and thresholds were defined; qualitative and quantitative results show that almost identical PPG waveforms can have very different BP waveforms. Next, we looked at the entropy of the features by computing mutual information. MI was extremely low for both handcrafted and learned auto-encoder features. In comparison, the heart rate and RWAT prediction tasks from PPG PWA have much lower multi-valued mapping factors and much higher mutual information, indicating that these tasks are relatively well-conditioned compared to BP prediction via PPG PWA. We believe that these tools are relevant for feasibility analysis in similar tasks involving wearable data, such as predicting stress levels from PPG 65 , 66 , 67 and estimating blood glucose levels from PPG 68 , 69 , 70 .
Our study does not aim to prove that blood pressure estimation from PPG PWA is impossible; however, it indicates that the task is very challenging, and evaluating performance fairly is non-trivial. To navigate this complexity, we present a set of tools that future research can leverage to avoid the pitfalls identified here. We hope our work can serve as a milestone and stimulate further discussion and exploration in the following areas: (1) Data Diversity: Collecting comprehensive datasets that represent subjects from diverse demographics and cardiovascular physiologies. (2) Multiple modalities: Exploring the integration of PPG with other physiological signals holds immense potential for enhancing prediction accuracy and providing a more holistic view of cardiovascular health. (3) Improved Sensors: Advancements in sensor technology are crucial to capture higher-fidelity PPG data with minimal external noise and environmental variables. We believe that focusing on these critical areas will lead to generalizable and scalable solutions, empowering a future where everyone can benefit from the accessibility and convenience of non-invasive cuffless BP estimation.
All the data used in this work is publicly available. The MIMIC 71 ( https://archive.physionet.org/physiobank/database/mimic2wdb/ ) and Aurora-BP 60 ( https://github.com/microsoft/aurorabp-sample-data ) datasets can be accessed by researchers after completing the necessary steps stated by the creators of those datasets.
Bhat, K. S., Jain, M. & Kumar, N. Infrastructuring telehealth in (in)formal patient-doctor contexts. Proc. ACM Hum.-Comput. Interact. 5 , https://doi.org/10.1145/3476064 (2021).
Haleem, A., Javaid, M., Singh, R. P. & Suman, R. Telemedicine for healthcare: Capabilities, features, barriers, and applications. Sensors International 2 , 100117. https://doi.org/10.1016/j.sintl.2021.100117 (2021).
Bestsennyy, O. Telehealth: A quarter-trillion-dollar post-covid-19 reality? (2021).
Patel, S. Take a pulse on health and wellness with your phone (2021).
Poh, M.-Z., McDuff, D. J. & Picard, R. W. Non-contact, automated cardiac pulse measurements using video imaging and blind source separation. Opt. Express 18 , 10762–10774 (2010).
Scully, C. G. et al. Physiological parameter monitoring from optical recordings with a mobile phone. IEEE Trans. Biomed. Eng. 59 , 303–306 (2011).
Wang, E. J. et al. Seismo: Blood Pressure Monitoring Using Built-in Smartphone Accelerometer and Camera, 1–9 (Association for Computing Machinery, New York, NY, USA, 2018).
Wang, E. J. et al. Hemaapp: noninvasive blood screening of hemoglobin using smartphone cameras. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing , 593–604 (2016).
Gairola, S. et al. Smartkc: Smartphone-based corneal topographer for keratoconus detection. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 5 , https://doi.org/10.1145/3494982 (2022).
Aggarwal, A. et al. Towards automating retinoscopy for refractive error diagnosis (Proc. ACM Interact. Mob, Wearable Ubiquitous Technol, 2022).
Liu, X. et al. Mobilephys: Personalized mobile camera-based contactless physiological sensing. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., Article 24. https://doi.org/10.1145/3517225 (2022). arXiv:2201.04039.
Liu, X., Fromm, J., Patel, S. N. & McDuff, D. Multi-task temporal shift attention networks for on-device contactless vitals measurement. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. & Lin, H. (eds.) Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual (2020).
Fuchs, F. D. & Whelton, P. K. High blood pressure and cardiovascular disease. Hypertension 75 , 285–292. https://doi.org/10.1161/hypertensionaha.119.14240 (2020).
Sun, D. et al. Type 2 diabetes and hypertension. Circ. Res. 124 , 930–937 (2019).
Mukkamala, R. et al. Toward ubiquitous blood pressure monitoring via pulse transit time: Theory and practice. IEEE Trans. Biomed. Eng. 62 , 1879–1901 (2015).
Buxi, D., Redouté, J.-M. & Yuce, M. R. A survey on signals and systems in ambulatory blood pressure monitoring using pulse transit time. Physiol. Meas. 36 , R1-26 (2015).
Sharma, M. et al. Cuff-less and continuous blood pressure monitoring: A methodological review. Technologies Basel 5 , 21 (2017).
Bramwell, J. C. & Hill, A. V. The velocity of pulse wave in man. Proceedings of the Royal Society of London. Series B, Containing Papers of a Biological Character 93 , 298–306, https://doi.org/10.1098/rspb.1922.0022 (1922).
Wesseling, K., Jansen, J., Settels, J. & Schreuder, J. Computation of aortic flow from pressure in humans using a nonlinear, three-element model. J. Appl. Physiol. 74 , 2566–2573 (1993).
Mukkamala, R., Stergiou, G. S. & Avolio, A. P. Cuffless blood pressure measurement. Annu. Rev. Biomed. Eng. 24 , 203–230 (2022).
Ibtehaz, N. & Rahman, M. S. PPG2ABP: Translating photoplethysmogram (PPG) signals to arterial blood pressure (ABP) waveforms using fully convolutional neural networks (2020). arXiv:2005.01669.
Cao, Y., Chen, H., Li, F. & Wang, Y. Crisp-BP: Continuous Wrist PPG-Based Blood Pressure Measurement, 378–391 (Association for Computing Machinery, New York, NY, USA, 2021).
Meneguitti Dias, F. et al. A machine learning approach to predict arterial blood pressure from photoplethysmography signal. In Computing in Cardiology Conference (CinC) (Computing in Cardiology, 2022).
Han, M. et al. Feasibility and measurement stability of smartwatch-based cuffless blood pressure monitoring: A real-world prospective observational study. Hypertens. Res. 46 , 922–931 (2023).
Groppelli, A. et al. Feasibility of blood pressure measurement with a wearable (watch-type) monitor during impending syncopal episodes. J. Am. Heart Assoc. https://doi.org/10.1161/jaha.122.026420 (2022).
Mukkamala, R. et al. Evaluation of the accuracy of cuffless blood pressure measurement devices: challenges and proposals. Hypertension 78 , 1161–1167 (2021).
Schlesinger, O., Vigderhouse, N., Eytan, D. & Moshe, Y. Blood pressure estimation from ppg signals using convolutional neural networks and siamese network. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) , 1135–1139, https://doi.org/10.1109/ICASSP40776.2020.9053446 (2020).
Huang, G., Liu, Z., Van Der Maaten, L. & Weinberger, K. Q. Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , 2261–2269, https://doi.org/10.1109/CVPR.2017.243 (2017).
Hill, B. L. et al. Imputation of the continuous arterial line blood pressure waveform from non-invasive measurements using deep learning. Sci. Rep. 11 , 15755. https://doi.org/10.1038/s41598-021-94913-y (2021).
El-Hajj, C. & Kyriacou, P. Deep learning models for cuffless blood pressure monitoring from PPG signals using attention mechanism. Biomed. Signal Process. Control 65 , 102301. https://doi.org/10.1016/j.bspc.2020.102301 (2021).
Hasanzadeh, N., Ahmadi, M. M. & Mohammadzade, H. Blood pressure estimation using photoplethysmogram signal and its morphological features. IEEE Sens. J. 20 , 4300–4310. https://doi.org/10.1109/jsen.2019.2961411 (2020).
Hsu, Y.-C., Li, Y.-H., Chang, C.-C. & Harfiya, L. N. Generalized deep neural network model for cuffless blood pressure estimation with photoplethysmogram signal only. Sensors 20 , 5668. https://doi.org/10.3390/s20195668 (2020).
Hajj, C. E. & Kyriacou, P. A. Cuffless and continuous blood pressure estimation from PPG signals using recurrent neural networks. In 2020 42nd Annual International Conference of the IEEE Engineering in Medicine Biology Society (EMBC) , https://doi.org/10.1109/embc44109.2020.9175699 (IEEE, 2020).
Slapničar, G., Mlakar, N. & Luštrek, M. Blood pressure estimation from photoplethysmogram using a spectro-temporal deep neural network. Sensors (Basel) 19 , 3420 (2019).
Wang, L., Zhou, W., Xing, Y. & Zhou, X. A novel neural network model for blood pressure estimation using photoplethesmography without electrocardiogram. J. Healthc. Eng. 2018, 1–9. https://doi.org/10.1155/2018/7804243 (2018).
Dey, J., Gaurav, A. & Tiwari, V. N. InstaBP: Cuff-less blood pressure monitoring on smartphone using single PPG sensor. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) , https://doi.org/10.1109/embc.2018.8513189 (IEEE, 2018).
Zhang, Y. & Feng, Z. A SVM method for continuous blood pressure estimation from a PPG signal. In Proceedings of the 9th International Conference on Machine Learning and Computing , https://doi.org/10.1145/3055635.3056634 (ACM, 2017).
Jain, M., Deb, S. & Subramanyam, A. V. Face video based touchless blood pressure and heart rate estimation. In 2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP) , https://doi.org/10.1109/mmsp.2016.7813389 (IEEE, 2016).
Gaurav, A., Maheedhar, M., Tiwari, V. N. & Narayanan, R. Cuff-less PPG based continuous blood pressure monitoring — a smartphone based approach. In 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) , https://doi.org/10.1109/embc.2016.7590775 (IEEE, 2016).
Gao, S. C., Wittek, P., Zhao, L. & Jiang, W. J. Data-driven estimation of blood pressure using photoplethysmographic signals. In 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) , https://doi.org/10.1109/embc.2016.7590814 (IEEE, 2016).
Duan, K., Qian, Z., Atef, M. & Wang, G. A feature exploration methodology for learning based cuffless blood pressure measurement using photoplethysmography. In 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) , https://doi.org/10.1109/embc.2016.7592189 (IEEE, 2016).
Suzuki, A. Inverse-model-based cuffless blood pressure estimation using a single photoplethysmography sensor. Proc. Inst. Mech. Eng. [H] 229 , 499–505. https://doi.org/10.1177/0954411915587957 (2015).
Kurylyak, Y., Lamonaca, F. & Grimaldi, D. A neural network-based method for continuous blood pressure estimation from a PPG signal. In 2013 IEEE International Instrumentation and Measurement Technology Conference (I2MTC) , https://doi.org/10.1109/i2mtc.2013.6555424 (IEEE, 2013).
Slapnicar, G., Lustrek, M. & Marinko, M. Continuous blood pressure estimation from PPG signal. Informatica (Slovenia) 42 (2018).
Panwar, M., Gautam, A., Biswas, D. & Acharyya, A. PP-net: A deep learning framework for PPG-based blood pressure and heart rate estimation. IEEE Sens. J. 20 , 10000–10011. https://doi.org/10.1109/jsen.2020.2990864 (2020).
Mousavi, S. S. et al. Blood pressure estimation from appropriate and inappropriate PPG signals using a whole-based method. Biomed. Signal Process. Control 47 , 196–206. https://doi.org/10.1016/j.bspc.2018.08.022 (2019).
Shimazaki, S., Kawanaka, H., Ishikawa, H., Inoue, K. & Oguri, K. Cuffless blood pressure estimation from only the waveform of photoplethysmography using cnn. In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) , https://doi.org/10.1109/embc.2019.8856706 (IEEE, 2019).
Harfiya, L. N., Chang, C.-C. & Li, Y.-H. Continuous blood pressure estimation using exclusively photopletysmography by LSTM-based signal-to-signal translation. Sensors https://doi.org/10.3390/s21092952 (2021).
Shimazaki, S., Kawanaka, H., Ishikawa, H., Inoue, K. & Oguri, K. Cuffless blood pressure estimation from only the waveform of photoplethysmography using CNN. Annu Int Conf IEEE Eng Med Biol Soc 2019 , 5042–5045 (2019).
Tazarv, A. & Levorato, M. A deep learning approach to predict blood pressure from PPG signals. CoRR abs/2108.00099 (2021). arXiv:2108.00099 .
Mahmud, S. et al. A shallow u-net architecture for reliably predicting blood pressure (bp) from photoplethysmogram (ppg) and electrocardiogram (ecg) signals (2021). arXiv:2111.08480 .
Kachuee, M., Kiani, M. M., Mohammadzade, H. & Shabany, M. Cuff-less high-accuracy calibration-free blood pressure estimation using pulse transit time. In 2015 IEEE International Symposium on Circuits and Systems (ISCAS) , 1006–1009, https://doi.org/10.1109/ISCAS.2015.7168806 (2015).
Stergiou, G. S. et al. A universal standard for the validation of blood pressure measuring devices: Association for the advancement of medical Instrumentation/European society of Hypertension/International organization for standardization (AAMI/ESH/ISO) collaboration statement. Hypertension 71 , 368–374 (2018).
O’Brien, E. et al. European society of hypertension recommendations for conventional, ambulatory and home blood pressure measurement. J. Hypertens. 21 , 821–848 (2003).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , 770–778, https://doi.org/10.1109/CVPR.2016.90 (2016).
Domingos, P. A few useful things to know about machine learning. Commun. ACM 55 , 78–87. https://doi.org/10.1145/2347736.2347755 (2012).
Zhang, C., Bengio, S., Hardt, M., Recht, B. & Vinyals, O. Understanding deep learning (still) requires rethinking generalization. Commun. ACM 64 , 107–115. https://doi.org/10.1145/3446776 (2021).
D’mello, S. K. & Kory, J. A review and meta-analysis of multimodal affect detection systems. ACM computing surveys (CSUR) 47 , 1–36 (2015).
Karimian, N., Guo, Z., Tehranipoor, M. & Forte, D. Human recognition from photoplethysmography (ppg) based on non-fiducial features. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) , 4636–4640, https://doi.org/10.1109/ICASSP.2017.7953035 (2017).
Mieloszyk, R. et al. A comparison of wearable tonometry, photoplethysmography, and electrocardiography for cuffless measurement of blood pressure in an ambulatory setting. IEEE Journal of Biomedical and Health Informatics (2022).
Hinderliter, A. L. et al. The long-term effects of lifestyle change on blood pressure: One-year follow-up of the ENCORE study. Am. J. Hypertens. 27 , 734–741. https://doi.org/10.1093/ajh/hpt183 (2013).
Kraskov, A., Stögbauer, H. & Grassberger, P. Estimating mutual information. Phys. Rev. E https://doi.org/10.1103/physreve.69.066138 (2004).
Takazawa, K. Clinical usefulness of the second derivative of a plethysmogram (acceleration plethysmogram). J. Cardiol. 23 , 207–217 (1993).
Elgendi, M. et al. The use of photoplethysmography for assessing hypertension. NPJ Digit. Med. 2 , 1–11 (2019).
Iqbal, T. et al. Stress monitoring using wearable sensors: A pilot study and stress-predict dataset. Sensors (Basel) 22 , 8135 (2022).
Celka, P., Charlton, P. H., Farukh, B., Chowienczyk, P. & Alastruey, J. Influence of mental stress on the pulse wave features of photoplethysmograms. Healthc. Technol. Lett. 7 , 7–12 (2020).
Elzeiny, S. & Qaraqe, M. Stress classification using photoplethysmogram-based spatial and frequency domain images. Sensors (Basel) 20 (2020).
Zhang, G. et al. A noninvasive blood glucose monitoring system based on smartphone PPG signal processing and machine learning. IEEE Trans. Industr. Inform. 16 , 7209–7218 (2020).
Hossain, S. et al. Estimation of blood glucose from PPG signal using convolutional neural network. In 2019 IEEE International Conference on Biomedical Engineering, Computer and Information Technology for Health (BECITHCON) (IEEE, 2019).
Bent, B. et al. Engineering digital biomarkers of interstitial glucose from noninvasive smartwatches. NPJ Digit. Med. 4 , 89 (2021).
Goldberger, A. L. et al. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 101 , E215-20 (2000).
Bonnafoux, P. Auscultatory and oscillometric methods of ambulatory blood pressure monitoring, advantages and limits: a technical point of view. Blood Press. Monit. 1 , 181–185 (1996).
Da He, D., Winokur, E. S., Heldt, T. & Sodini, C. G. The ear as a location for wearable vital signs monitoring. In 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology , 6389–6392 (IEEE, 2010).
Holz, C. & Wang, E. J. Glabella: Continuously sensing blood pressure behavior using an unobtrusive wearable device. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1 , 1–23 (2017).
Ding, X.-R., Zhang, Y.-T., Liu, J., Dai, W.-X. & Tsang, H. K. Continuous cuffless blood pressure estimation using pulse transit time and photoplethysmogram intensity ratio. IEEE Trans. Biomed. Eng. 63 , 964–972 (2015).
Bellman, R. & Kalaba, R. On adaptive control processes. IRE Trans. Autom. Control. 4 , 1–9 (1959).
Jaynes, E. T. Information theory and statistical mechanics. Phys. Rev. 106 , 620–630. https://doi.org/10.1103/PhysRev.106.620 (1957).
Authors and affiliations.
Microsoft Research, Bengaluru, India
Suril Mehta, Nipun Kwatra & Mohit Jain
Microsoft Research, Redmond, USA
Daniel McDuff
S.M. performed analyses, designed experiments, and wrote the manuscript. N.K. designed the experiments and wrote the manuscript. M.J. designed the experiments and wrote the manuscript. D.M. designed the experiments and wrote the manuscript.
Correspondence to Suril Mehta .
Competing interests.
The authors declare no competing interests.
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The gold standard for blood pressure measurement, used in Intensive Care Units and Operating Theatres, is an invasive procedure that involves inserting a cannula needle into an artery. The cannula is connected to a transducer that converts the pulse signal into the arterial pressure waveform, providing continuous pulse-level BP measurements. Such invasive measurement is not feasible outside a hospital setting, so two alternative cuff-based non-invasive procedures, the auscultatory and oscillometric methods, are widely used 72 . However, these methods do not provide continuous measurement. Hence, researchers 7 , 73 , 74 have been actively working on novel methods to accurately estimate blood pressure in a non-invasive, continuous manner. A majority of the proposed methods involve calculating the Pulse Transit Time (PTT), which is inversely correlated with BP. PTT is defined as the time taken by a pulse to travel between two arterial sites, one measured using PPG and the other captured from a different sensor. For example, Ding et al. 75 captured ECG, He et al. 73 used a ballistocardiogram from the ear, Holz and Wang 74 collected accelerometer signals from the head, and Wang et al. 7 captured accelerometer signals using a smartphone pressed to the chest.
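To make the PTT idea concrete, below is a minimal sketch under the common ECG-plus-PPG setup, where PTT is approximated as the delay between each ECG R-peak and the foot of the following PPG pulse. The detection rules are deliberately crude and all names are ours; this illustrates the concept only, not any of the cited systems.

```python
import numpy as np
from scipy.signal import find_peaks

def pulse_transit_times(ecg, ppg, fs=125):
    """Crude PTT estimate: delay from each ECG R-peak to the foot
    (minimum) of the PPG pulse within the following second."""
    r_peaks, _ = find_peaks(ecg, distance=fs // 3, prominence=0.5)
    ptts = []
    for r in r_peaks:
        segment = ppg[r:r + fs]          # look up to 1 s after the R-peak
        if len(segment) < 2:
            continue
        foot = int(np.argmin(segment))   # PPG foot ~ pulse arrival
        ptts.append(foot / fs)           # transit time in seconds
    return np.array(ptts)                # PTT is inversely related to BP
```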
Considering the ease and accessibility of accurately measuring heart rate and heart rate variability via PPG captured from a smartphone or wearable, a natural extension is to attempt to calculate blood pressure solely by analyzing the PPG pulse wave. Recent works 22 , 27 , 29 , 48 , 50 have explored the BP prediction task from PPG pulse wave analysis and published promising results. These methods build data-driven regression models that learn meaningful features by leveraging large labelled PPG-BP datasets such as MIMIC 52 . For example, Schlesinger et al. 27 predicted BP using Convolutional Neural Networks (CNNs) trained on a frequency-domain representation of the PPG signal, with a Siamese scheme to calibrate BP predictions at run time; Tazarv and Levorato 50 used a Long Short-Term Memory (LSTM) network with the PPG waveform as input; and Slapnicar et al. 34 proposed an ensemble of 1-D CNNs and LSTMs operating on the raw PPG signal and its first two derivatives. Some recent works 21 , 29 extend this line by predicting the full Arterial Blood Pressure (ABP) waveform from the PPG signal using U-Net based architectures.
We propose two tools—based on multi-valued mappings and on mutual information—to assess whether the input to a model is a good predictor of the output.
If the input sensor signal ( x ) is a good predictor of an output health label ( y ), there exists a function f such that \(y=f(x)\) . Moreover, the function f should be well-conditioned, i.e., small changes in x should not lead to large changes in y . This is important to ensure that small amounts of noise in the sensor measurement (which are bound to occur in a real-world setting) do not lead to significant errors in the output. To test whether a task is well-conditioned, we propose searching for multi-valued mappings using Algorithm 1. Our multi-valued mapping algorithm searches for samples that are close in the input space but distant in the output space. If the algorithm is able to find such mappings, the function f either does not exist or is at best ill-conditioned.
Algorithm 1: Multi-valued Mapping Search
Algorithm 1 has two key components: a distance function for comparing the input samples and an optimal threshold value for filtering the multi-valued mappings.
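In outline, the search can be sketched as below. Here `dist` and the threshold `eps` stand for the two components discussed next; `delta`, the label gap above which two outputs count as distant, is a hypothetical knob we add for illustration (e.g., 10 mmHg for SBP). The quadratic scan is the naive form; a nearest-neighbour index would be used at scale.

```python
import itertools

def find_multivalued_mappings(X, y, dist, eps, delta=10.0):
    """Return index pairs whose inputs are within `eps` of each other
    under `dist` but whose labels differ by more than `delta`."""
    mappings = []
    for i, j in itertools.combinations(range(len(X)), 2):
        if dist(X[i], X[j]) <= eps and abs(y[i] - y[j]) > delta:
            mappings.append((i, j))
    return mappings
```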
Distance Function : Searching for multi-valued mappings in a dataset requires a metric to quantify the distance between input samples. However, choosing the right distance function is not always obvious, and one needs to be careful about the implicit assumptions in any given metric. For example, cross-correlation, dynamic time warping (DTW) 76 , and Euclidean distance are all ways to measure the distance between two time series/waveforms, and each has specific characteristics: cross-correlation is invariant to phase (time shifts), DTW is scale invariant in the time dimension, while Euclidean distance is sensitive to both. For cross-correlation, a sliding-window dot product of the two series is computed to find the point where similarity is maximized; DTW computes an optimal match by minimizing the cumulative alignment cost between the two series; Euclidean distance measures similarity using the L2 distance.
Ideally, the distance function should align with the task requirements. Among the three, DTW is invariant to the time scale; however, BP depends directly on heart rate, which in turn is determined by the periodicity of the PPG waves, so DTW's time-scale invariance discards information that matters for this task, making it a poor choice. Euclidean distance used in isolation is not a good choice either: the same PPG signal shifted slightly in time can yield a large Euclidean distance, even though the relationship between PPG and BP should not change under small time shifts. Cross-correlation is therefore the right starting point: we align the PPG signals via cross-correlation and then compute the Euclidean distance between the aligned signals. This combined measure worked well in our experiments, and we used it as the distance measure throughout.
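A sketch of this combined measure, assuming two equal-length, normalized PPG windows (our own implementation outline, not the paper's exact code):

```python
import numpy as np

def aligned_euclidean(a, b):
    """Shift `b` to the lag maximizing its cross-correlation with `a`,
    then take the Euclidean distance over the overlapping samples."""
    corr = np.correlate(a, b, mode="full")
    lag = int(np.argmax(corr)) - (len(b) - 1)   # best alignment shift
    if lag > 0:
        a_al, b_al = a[lag:], b[:len(b) - lag]
    elif lag < 0:
        a_al, b_al = a[:len(a) + lag], b[-lag:]
    else:
        a_al, b_al = a, b
    n = min(len(a_al), len(b_al))
    return float(np.linalg.norm(a_al[:n] - b_al[:n]))
```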
Optimal Threshold : After choosing an appropriate distance function, we need to identify an optimal distance threshold below which two signals can be considered “equal”. Finding such a threshold is not straightforward. If the threshold is too generous (i.e., high), we will end up treating distant input signals as equal and obtain misleading multi-valued mappings. On the other hand, if the threshold is too strict (i.e., low), we may not find any multi-valued mappings even for ill-conditioned functions, as the chance of two input signals being identical, especially in the presence of noise, is very small. To identify the optimal threshold, we calculate the Euclidean distance between two consecutive aligned PPG waves, each 2 seconds in duration; over such a short interval the underlying signal should remain consistent. Ideally, the difference between two consecutive PPG waves reflects only irreducible measurement error, and this can be used as the threshold for filtering multi-valued mappings. Figure 9 shows the results of this analysis: a majority of the PPG wave pairs exhibit a Euclidean distance of \(\le\) 1, which led us to choose 1.0 as the threshold for our experiments.
Figure 9: The distribution of Euclidean distances between pairs of aligned consecutive PPG waves.
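The threshold-selection step can then be sketched as below, reusing `aligned_euclidean` from the sketch above on consecutive 2-second windows; the `waves` array and the percentile check are illustrative:

```python
import numpy as np

def consecutive_wave_distances(waves):
    """Distances between consecutive, equal-length, normalized PPG windows."""
    return np.array([aligned_euclidean(waves[k], waves[k + 1])
                     for k in range(len(waves) - 1)])

# d = consecutive_wave_distances(waves)
# print(np.percentile(d, 90))   # most pairs fall at distance <= 1 (cf. Figure 9)
```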
Note that our multi-valued mapping check is a one-way method: if we are able to find multi-valued mappings, f is ill-conditioned; however, not finding multi-valued mappings does not guarantee the existence of a well-conditioned f , because Algorithm 1 may fail to find signals close in the input space due to sparsity of the dataset. The mutual information check discussed next provides a complementary method.
Mutual Information (MI) is an information-theoretic measure of the dependence between two random variables X and Y, defined as:

\(I(X;Y) = H(X) - H(X \vert Y) = H(Y) - H(Y \vert X)\)
where H is the Shannon entropy function ( \(H(X) = -\sum _{i} p(x_i) \log (p(x_i))\) ). For continuous analog data, it is computed via the limiting density of discrete points (LDDP) 77 . The marginal entropies H ( X ) and H ( Y ) represent the amount of information needed to describe the outcome of the random variable; this is the same as the uncertainty of the random variable. \(H(X \vert Y)\) and \(H(Y \vert X)\) are conditional entropies, and denote the amount of information needed to describe the outcome of one random variable when the value of the other is known; equivalently, the amount of uncertainty left in one random variable when the other is known. The mutual information I can then be interpreted as the amount of information (or reduction in uncertainty) that knowing one variable provides about the other. For example, I ( X ; Y ) is zero if X and Y are independent, and maximal when X is a deterministic function of Y or vice versa.
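As a toy numerical check of these identities (our own example, unrelated to the datasets), consider a small discrete joint distribution; computing I(X;Y) through either conditional entropy gives the same value:

```python
import numpy as np

def H(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_xy = np.array([[0.3, 0.1],      # joint distribution p(x, y)
                 [0.1, 0.5]])
p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
H_xy = H(p_xy.ravel())

mi_via_x = H(p_x) - (H_xy - H(p_y))   # H(X) - H(X|Y)
mi_via_y = H(p_y) - (H_xy - H(p_x))   # H(Y) - H(Y|X)
print(mi_via_x, mi_via_y)             # both evaluate to ~0.257 bits
```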
Mutual information can be an effective measure in our case to evaluate whether the input signal ( x ) is a good predictor of the output health label ( y ). However, since the computation of MI relies on estimating the probability density functions of the random variables, it is non-trivial to robustly estimate MI for high-dimensional data such as time-series PPG. To overcome this curse of dimensionality, we recommend the following dimensionality reduction approaches before computing the MI.
Auto-Encoder . Since MI is invariant under smooth invertible transformations of the variables, we propose using an auto-encoder to aggressively reduce the dimensionality of the input space. We train an auto-encoder with the smallest number of bottleneck features needed to achieve a target mean-squared reconstruction loss of 0.1 on the normalized dataset. For the MIMIC and Aurora-BP datasets, we achieved this target with a bottleneck size of 20, at which MI estimation worked robustly.
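As a rough illustration of this route (a sketch, not the exact training setup): a small PyTorch auto-encoder compresses each normalized window to a 20-dimensional code, and scikit-learn's k-NN-based `mutual_info_regression` (a Kraskov-style estimator) scores the code against the label. The window length, layer sizes, and training loop are assumptions. Note also that `mutual_info_regression` scores each code dimension against the label separately; the combined MI over the full code, as used in our analysis, requires a multivariate estimator.

```python
import torch
import torch.nn as nn
from sklearn.feature_selection import mutual_info_regression

WIN, BOTTLENECK = 250, 20        # e.g. 2 s of 125 Hz PPG -> 20-d code

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(WIN, 128), nn.ReLU(),
                                 nn.Linear(128, BOTTLENECK))
        self.dec = nn.Sequential(nn.Linear(BOTTLENECK, 128), nn.ReLU(),
                                 nn.Linear(128, WIN))

    def forward(self, x):
        return self.dec(self.enc(x))

def bottleneck_mi(ppg_windows, labels, epochs=200):
    """Train the auto-encoder, then score each bottleneck dimension
    against the label with a k-NN MI estimator."""
    model = AutoEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.as_tensor(ppg_windows, dtype=torch.float32)
    for _ in range(epochs):      # in practice: stop once MSE <= 0.1
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)
        loss.backward()
        opt.step()
    codes = model.enc(x).detach().numpy()
    return mutual_info_regression(codes, labels)   # per-dimension MI
```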
Hand-Crafted Features . As an alternative to the auto-encoder, we can use hand-crafted features extracted from the input signal based on prior literature 63 , 64 and use these features for MI estimation. For example, in the task of BP prediction from the PPG signal, common features include the normalized systolic slope, heart rate, and heart rate variability. The MI estimation process helps us understand the importance of each of these features, both collectively and independently. Note that with hand-crafted features there is always a concern of completeness (i.e., whether the features capture all the information the task needs from the input), so we recommend the auto-encoder approach whenever possible.
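A sketch of this alternative path, with a deliberately small feature set: the normalized systolic slope, heart rate, and heart rate variability are named above, while the detection rules and window conventions here are our own simplifications.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.feature_selection import mutual_info_regression

def ppg_features(window, fs=125):
    """Heart rate, a crude HRV proxy, and a normalized systolic slope."""
    peaks, _ = find_peaks(window, distance=fs // 3)   # systolic peaks
    ibi = np.diff(peaks) / fs                          # inter-beat intervals (s)
    hr = 60.0 / ibi.mean() if ibi.size else np.nan     # heart rate (bpm)
    hrv = ibi.std() if ibi.size > 1 else np.nan        # crude HRV proxy
    slope = np.max(np.gradient(window)) / (np.ptp(window) + 1e-9)
    return np.array([hr, hrv, slope])

# X = np.stack([ppg_features(w) for w in windows])
# print(mutual_info_regression(X, sbp))   # MI of each feature with SBP
```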
C.1 Datasets
Our work builds on two datasets whose properties are critical to understanding our results.
MIMIC II : The MIMIC II dataset contains records of continuous high-resolution physiological waveforms of ICU patients, such as ABP, PPG, and ECG, sampled at 125 Hz. The dataset consists of 67,830 records of varying duration from 30,000 patients 71 . For the purposes of our study, we perform our analysis on a pre-processed subset of the MIMIC II dataset consisting of 12,000 records from 942 patients 52 . This subset is particularly useful for our analysis: it includes enough patients for training and testing to be compliant with AAMI standards, and it has been commonly used in previous research (Table 1 ).
Aurora-BP : The Aurora-BP dataset 60 consists of 24,650 records from 483 subjects. Each subject has multiple records of varying duration, collected at rest or while performing activities such as exercise and brisk walking. The records come from multiple sensors/devices, including optical PPG, EKG, a tonometer, an accelerometer, and cuff-based blood pressure.
To contextualize the performance of the SBP (Systolic BP) prediction task, two benchmarks are widely used: the AAMI and BHS standards. The AAMI (Association for the Advancement of Medical Instrumentation) standards 53 require that the test set comprise at least 85 subjects, with at least 10% of them having an SBP above 180 mmHg and at least 10% having an SBP under 100 mmHg. For a test device to be compliant with the AAMI standards, the SBP prediction must have a bias under 5 mmHg and an error standard deviation (SD) under 8 mmHg on the test set. The BHS (British Hypertension Society) standards 54 state that the test set should consist of at least 85 subjects and that the cohort should be representative of the target audience of the device. The performance of the test device is divided into grades (Table 10 ). Additionally, the test data should cover the overall pressure range, specifically these three ranges: \(\le\) 130, 130–160, and \(\ge\) 160 mmHg.
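For concreteness, the two checks can be expressed as below. The AAMI bias/SD limits follow the text; the BHS per-grade cut-offs used here are the commonly cited percentages of absolute errors within 5/10/15 mmHg (the official grading is given in Table 10), so treat them as an assumption of this sketch.

```python
import numpy as np

def aami_compliant(pred, true):
    """AAMI: mean error within +/-5 mmHg and error SD under 8 mmHg."""
    err = np.asarray(pred, float) - np.asarray(true, float)
    return abs(err.mean()) <= 5.0 and err.std() <= 8.0

def bhs_grade(pred, true):
    """BHS grade from the fraction of |errors| within 5/10/15 mmHg."""
    ae = np.abs(np.asarray(pred, float) - np.asarray(true, float))
    pct = [100.0 * np.mean(ae <= t) for t in (5, 10, 15)]
    for grade, cuts in zip("ABC", [(60, 85, 95), (50, 75, 90), (40, 65, 85)]):
        if all(p >= c for p, c in zip(pct, cuts)):
            return grade
    return "D"
```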
Dataset size : To understand the effect of data size on MI, and to verify whether our dataset had enough samples to enable robust MI estimation, we conducted the following experiment. We took randomly selected slices of the data (ranging from 0.1% to 100% of the data) and computed the combined MI over 20 runs (a bootstrapping technique). We performed this analysis for both the MIMIC and Aurora-BP datasets. As shown in Figures 10 (A) and (B), although the estimates at smaller dataset sizes show high variation, the variation bounds are very tight at larger sizes. This imparts confidence that our MI estimates over the full datasets are robust. Interestingly, we also found that using a smaller dataset can yield higher MI estimates, which may be explained by the fact that fewer multi-valued mappings are observed in a smaller sample. Thus, a small dataset might lead to an overly optimistic perception of the relationship between input and output.
Figure 10: The effect of the number of ( A ) total windows (MIMIC dataset), ( B ) total patients (Aurora-BP dataset), and ( C ) age range (Aurora-BP dataset) on the mutual information between PPG PWA features and BP. We perform 20 runs with different random subsets of the data to plot the distributions. Optical PPG features similar to Table 7 were used for ( A ) and ( B ), while richer features (patient demographic data, PPG optical features, and features derived using ECG, similar to Table 8 ) were used for ( C ). For each plot, the corresponding features were combined and treated as a single multi-dimensional input for computing MI.
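In outline, the bootstrap looks as follows, with `estimate_mi` standing in for whichever combined-MI estimator is applied to the chosen feature representation; the fractions and run count mirror the experiment above, and the function names are ours.

```python
import numpy as np

def mi_vs_size(X, y, estimate_mi, fracs=(0.001, 0.01, 0.1, 0.5, 1.0), runs=20):
    """Distribution of MI estimates over random slices of each size."""
    rng = np.random.default_rng(0)
    out = {}
    for f in fracs:
        n = max(2, int(f * len(X)))
        samples = []
        for _ in range(runs):
            idx = rng.choice(len(X), size=n, replace=False)
            samples.append(estimate_mi(X[idx], y[idx]))
        out[f] = samples
    return out
```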
Participants’ Demographics : Apart from data size, we found that demographic factors, such as age, also impacted mutual information. Figure 10 (C) shows the variation in combined MI with respect to age for the Aurora-BP dataset. In particular, we found that in the age groups of 21–29 and 60–85 years, heart rate and weight were the most important features, which was not the case for the other age groups.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
Cite this article.
Mehta, S., Kwatra, N., Jain, M. et al. Examining the challenges of blood pressure estimation via photoplethysmogram. Sci Rep 14 , 18318 (2024). https://doi.org/10.1038/s41598-024-68862-1
Received : 18 June 2024
Accepted : 29 July 2024
Published : 07 August 2024
DOI : https://doi.org/10.1038/s41598-024-68862-1
Soaring temperatures in New York, July 2010. Photo by Eric Thayer/Reuters
It’s not just the planet and not just our health – the impact of a warming climate extends deep into our cortical fissures.
by Clayton Page Aldern
In February 1884, the English art critic and polymath John Ruskin took the lectern at the London Institution for a pair of lectures on the weather. ‘The Storm-Cloud of the Nineteenth Century’ was his invective against a particular ‘wind of darkness’ and ‘plague-cloud’ that, in his estimation, had begun to envelop Victorian cities only in recent years. He had been taking careful meteorological measurements, he told a sceptical audience. He railed against the ‘bitterness and malice’ of the new weather in question and, perhaps more importantly, against how it mirrored a certain societal ‘moral gloom’. You could read in us what you could read in the weather, he suggested.
July Thundercloud in the Val d’Aosta (1858) by John Ruskin. Courtesy Wikipedia
It was easy that February, and perhaps easy today, to disregard any alleged winds of darkness as the ravings of a madman. Clouds are clouds: even if Ruskin’s existed – which was a question of some contemporaneous debate – it would be untoward to imagine they bore any relationship with the human psyche. As Brian Dillon observed of the cloud lectures in The Paris Review in 2019, it can be hard to tell where Ruskin’s ‘bad weather ends and his own ragged, doleful mood begins.’ In 1886, Ruskin suffered a mental breakdown while giving a talk in Oxford. By the end of his life at the turn of the century, he was widely considered insane. His ramblings on meteorology and the human spirit aren’t exactly treated with the same gravitas as his books on J M W Turner.
And yet, for Ruskin, the clouds weren’t just clouds: they were juiced up by a ‘dense manufacturing mist’, as he’d noted in a diary entry. The plague-clouds embodied the miasma of the Industrial Revolution; the moral gloom was specifically that which arose from the rapid societal and environmental changes that were afoot. Ruskin’s era had seen relentless transformation of pastoral landscapes into industrial hubs. Everything smelled like sulphur and suffering. Soot-filled air, chemical and human waste, the clamour of machinery – these were more than just physical nuisances. They were assaults on the senses, shaping moods and behaviour in ways that were not yet fully understood.
Mining Area (1852-1905) by Constantin Meunier. Courtesy Wikipedia
Ruskin believed that the relentless pace of industrialisation, with its cacophony of tools and sprawling factories and environmental destruction, undermined psychological wellbeing: that the mind, much like the body, required a healthy social and physical environment to thrive. This was actually a somewhat new idea. (Isaac Ray, a founder of the American Psychiatric Association, wouldn’t define the idea of ‘mental hygiene’, the precursor to mental health, until 1893.) Instability in the environment, for Ruskin, begot instability in the mind. One reflected the other.
More than a century later, as we grapple with a new suite of breakneck environmental changes, the plague-clouds are again darkly literal. Global average surface temperatures have risen by about 1.1°C (2°F) since the pre-industrial era, with most of this warming occurring in the past 40 years. Ice is melting; seas are steadily rising; storms are – well, you know this story. And yet, most frequently, it is still a story of the world out there: the world outside of us. The narrative of climate change is one of meteorological extremes, economic upheaval and biodiversity losses. But perhaps it is worth taking a maybe-mad Ruskin seriously. What of our internal clouds? As the climate crisis warps weather and acidifies oceans and shatters temperature records with frightening regularity, one is tempted to ask if our minds are changing in kind.
Here are some of the most concerning answers in the affirmative. Immigration judges are less likely to rule in favour of asylum seekers on hotter days. On such days, students behave as if they’ve lost a quarter-year of education, relative to temperate days. Warmer school years correspond to lower rates of learning. Temperature predicts the incidence of online hate speech. Domestic violence spikes with warmer weather. Suicide, too.
But you already know what this feels like. Perhaps you’re more ornery in the heat. Maybe you feel a little slow in the head. It’s harder to focus and easier to act impulsively. Tomes of cognitive neuroscience and behavioural economics research back you up, and it’s not all as dire as domestic violence. Drivers honk their horns more frequently (and lean on them longer) at higher temperatures. Heat predicts more aggressive penalties in sport. In baseball, pitchers are more likely to hit batters with their pitches on hot days – and the outdoor temperature is an even stronger predictor of their tendency to retaliate in this manner if they’ve witnessed an opposing pitcher do the same thing.
In other words: it would appear the plague-clouds are within us, too. They illustrate the interconnectedness of our inner and outer worlds. They betray a certain flimsiness of human agency, painting our decision-making in strokes of environmental influence far bolder than our intuition suggests. And they throw the climate crisis into fresh, stark relief: because, yes, as the climate changes, so do we.
The London Institution closed in 1912. These days, when you want to inveigh against adverse environmental-mind interactions, you publish a paper in The Lancet. And so that is what 24 mostly British, mostly clinical neurologists did in May 2024, arguing that the ‘incidence, prevalence, and severity of many nervous system conditions’ can be affected by global warming. For these researchers, led by Sanjay Sisodiya, professor of neurology at University College London in the UK, the climate story is indeed one of internal clouds.
In their survey of 332 scientific studies, Sisodiya and his colleagues show that climatic influence extends far beyond behaviour and deep into cortical fissures. Aspects of migraine, stroke, seizure and multiple sclerosis all appear to be temperature dependent. In Taiwan, report the authors, the risk of schizophrenia hospitalisation increases with widening daytime temperature ranges. In California , too, ‘hospital visits for any mental health disorder, self-harm, intentional injury of another person, or homicide’ rise with broader daily temperature swings. In Switzerland , hospitalisations for psychiatric disorders increase with temperature, with the risk particularly pronounced for those with developmental disorders and schizophrenia.
Outside the hospital, climate change is extending the habitable range of disease vectors like ticks, mosquitoes and bats, causing scientists to forecast an increased incidence of vector-borne and zoonotic brain maladies like yellow fever, Zika and cerebral malaria. Outside the healthcare system writ large, a changing environment bears on sensory systems and perception, degrading both sensory information and the biological tools we use to process it. Outside the realm of the even remotely reasonable, warming freshwater brings with it an increased frequency of cyanobacterial blooms, the likes of which release neurotoxins that increase the risk of neurodegenerative diseases such as amyotrophic lateral sclerosis (ALS, also known as Lou Gehrig’s disease).
Indeed, recent studies suggest that climate change may be exacerbating the already substantial burden of neurodegenerative diseases like Parkinson’s and Alzheimer’s. In countries with warmer-than-average climates, more intense warming has been linked to a greater increase in Parkinson’s cases and, as Sisodiya et al note, the highest forecasted rates of increase in dementia prevalence are ‘expected to be in countries experiencing the largest effects of climate change’. Similarly, short-term exposure to high temperatures appears to drive up emergency department visits for Alzheimer’s patients. The air we breathe likely plays a complementary role: in Mexico City, for example, where residents are exposed to high levels of fine particulate matter and ozone from a young age, autopsies have revealed progressive Alzheimer’s pathology in 99 per cent of those under the age of 30.
The risks aren’t limited to those alive today. In 2022, for example, an epidemiological study revealed that heat exposure during early pregnancy is associated with a significantly increased risk of children developing schizophrenia, anorexia and other neuropsychiatric conditions. High temperatures during gestation have long been known to delay neurodevelopment in rats. Other scientists have shown that experiencing natural disasters in utero greatly increases children’s risk of anxiety, depression, attention-deficit/hyperactivity disorder and conduct disorders later in life. Such effects cast the intergenerational responsibilities of the Anthropocene in harsh new light – not least because, as Sisodiya and colleagues write, there is a tremendous ‘global disparity between regions most affected by climate change (both now and in the future) and regions in which the majority of studies are undertaken.’ We don’t know what we don’t know.
What we do know is that the brain is emerging, in study after study, as one of climate change’s most vulnerable landscapes.
It is a useful reorientation. Return to the horn-honking and the baseball pitchers for a moment. A focus on the brain sheds some potential mechanistic light on the case studies and allows us to avoid phrases like ‘wind of darkness’. Higher temperatures, for example, appear to shift functional brain networks – the coordinated behaviour of various regions – toward randomised activity. In extreme heat, scientists have taken note of an overworked dorsolateral prefrontal cortex (dlPFC), the evolutionarily new brain region that the neuroendocrinologist Robert M Sapolsky at Stanford University in the US calls ‘the definitive rational decider in the frontal cortex’. The dlPFC limits the degree to which people make impulsive decisions; disrupted dlPFC activity tends to imply a relatively heightened influence of limbic structures (like the emotionally attuned amygdala) on behaviour. More heat, less rational decision-making.
The physicality of environmental influence on the brain is more widespread than the dlPFC – and spans multiple spatial scales. Heat stress in zebrafish, for example, down-regulates the expression of proteins relevant to synapse construction and neurotransmitter release. In mice, heat also triggers inflammation in the hippocampus, a brain region necessary for memory formation and storage. While neuroinflammation often plays an initially protective role, chronic activation of immune cells – like microglia and astrocytes – can turn poisonous, since pro-inflammatory molecules can damage brain cells in the long run. In people, hyperthermia is associated with decreased blood flow to this region. Psychologists’ observations of waning cognition and waxing aggression at higher temperatures make a world of sense in the context of such findings.
The nascent field of environmental neuroscience seeks to ‘understand the qualitative and quantitative relationships between the external environment, neurobiology, psychology and behaviour’. Searching for a more specific neologism – since that particular phrase also encompasses environmental exposures like noise, urban development, lighting and crime – we might refer to our budding, integrative field as climatological neuroepidemiology. Or, I don’t know, maybe we need something snappier for TikTok. Neuroclimatology? Ecological neurodynamics?
I tend to prefer: the weight of nature.
The weight forces our hands, as in the case of the behavioural effects highlighted above. When extreme heat reaches into your mind and tips your scales toward violence, it is constraining your choices. By definition, impulsive decisions are rooted in comparatively less reflection than considered decisions: to the extent that a changing climate influences our reactions and decision-making, we should understand it as compromising our perceived free will. The weight of nature is heavy. It displaces us.
It is also a heavy psychological burden to carry. You are likely familiar with the notion of climate anxiety . The phrase, which tends to refer to a near-pathological state of worry and fear of impending environmental destruction, has never sat particularly well with me. Anxiety, as defined by the Diagnostic and Statistical Manual , is usually couched in terms of ‘excessive’ worry. I’m not convinced there’s anything excessive about seeing the climatic writing on the wall and feeling a sense of doom. Perhaps we ought to consider the climate-anxious as having more developed brains than the rest of the litter – that the Cassandras are the only sane ones left.
I’m not exactly joking. Neuroscience has begun to study the brains in question, and not for nothing. The midcingulate cortex, a central hub in the brain’s threat-detection circuitry, may hold some clues to the condition’s biological basis: in one 2024 study, for example, researchers at Northern Michigan University in the US found that people who reported higher levels of anxiety about climate change showed distinct patterns of brain structure and function in this region, relative to those with lower levels of climate anxiety – and irrespective of base levels of anxiety writ large. In particular, the climate-anxious brain appears to play host to a smaller midcingulate (in terms of grey matter), but one that’s functionally more connected to other key hubs in the brain’s salience network, a system understood to constantly scan the environment for emotionally relevant information. In the salience network, the midcingulate cortex works hand in hand with limbic structures like the amygdala and insula to prepare the body to respond appropriately to this type of information. In people with climate anxiety, this network may be especially attuned to signals of climate-related threats.
Rather than indicating a deficiency, then, a diminutive midcingulate might reflect a more efficient, finely honed threat-detection system. The brain is well known to prune redundant connections over time, preserving only the most useful neural pathways. Selective sculpting, suggest the Michigan researchers, may allow the climate-anxious brain to process worrisome information more effectively, facilitating rapid communication between the midcingulate and other regions involved in threat anticipation and response. In other words, they write, the climate-anxious midcingulate might be characterised by ‘more efficient wiring’.
This neural sensitivity to potential dangers could be both a blessing and a curse. On one hand, it may attune some people to the very real perils of the future. The midcingulate is critical for anticipating future threats, and meta-analyses have found the region to be consistently activated when people contemplate unpredictable negative outcomes. Given the looming spectre of climate catastrophe, a hair-trigger threat-detection system could be an adaptive asset.
On the other hand, argue the researchers:
[T]he complexity, uncertainty, as well as temporal and geographical distance of the climate crisis, in addition to its global nature, may lead individuals to deprioritising the risks associated with climate change, or becoming overwhelmed and disengaged – a state sometimes referred to as ‘eco-paralysis’.
An overactive midcingulate has been implicated in clinical anxiety disorders, and the new findings suggest that climate anxiety shares some of the same neural underpinnings. (It’s important to recall that climate anxiety seems to be distinct from generalised anxiety, though, as the brain differences observed in the Michigan study couldn’t be explained by overall anxiety levels.)
Ultimately, while speculative, these findings suggest that climate anxiety is not merely a sociocultural phenomenon, but one with theoretically identifiable neural correlates. They provide a potential biological framework for understanding why some people may be more psychologically impacted by climate change than others. And they raise intriguing questions about whether the brains of the climate anxious are particularly well-suited for confronting the existential threat of a warming world – or whether they are vulnerable to becoming overwhelmed by it. In all cases, though, they illustrate that world reaching inward.
There is perhaps a flipside to be realised here. A changing climate is seeping into our very neurobiology. What might it mean to orient our neurobiology toward climate change?
Such is the premise of a 2023 article in Nature Climate Change by the neuroscientist Kimberly Doell at the University of Vienna in Austria and her colleagues, who argue that the field is well positioned to inform our understanding of climate-adaptation responses and pro-environmental decision-making. In the decades since Ruskin shook his fists at the sky, environmental neuroscience has begun to probe the reciprocal dance between organisms and their ecological niches. We know now that the textures of modern environments – green spaces, urban sprawl, socioeconomic strata – all leave their mark on the brain. Climate change is no different.
Accordingly, argue Doell et al, scientists and advocates alike can integrate findings from neuroscience to improve communications strategies aimed at spurring climate action. They want to turn the tables, taking advantage of insights from neurobiology and cognitive neuroscience to more effectively design climate solutions – both within ourselves and for society as a whole.
We have models for this type of approach. Poverty research, for instance, has long linked socioeconomic conditions with subpar health. In more recent years, neuroscience has reverse-engineered the pathways by which poverty’s various insults – understimulation, toxic exposures, chronic stress – can erode neural architecture and derail cognitive development. Brain science alone won’t solve poverty, yet even a limited understanding of these mechanisms has spurred research in programmes like Head Start, a family-based preschool curriculum that has been shown to boost selective attention (as evident in electrophysiological recordings) and cognitive test scores. While the hydra of structural inequity is not easily slain, neuroscientists have managed to shine some light on poverty’s neural correlates, flag its reversible harms, and design precision remedies accordingly. This same potential, argue Doell and her colleagues, extends to the neuroscience of climate change.
To realise this potential, though, we need to further understand how the Anthropocene’s fever dream is already warping our wetware. Social and behavioural science have begun cataloguing the psychological fallout of a planet in flux, but a neural taxonomy of climate change awaits. The field’s methodological and conceptual arsenal is primed for the challenge, but honing it will demand alliances with climate science, medicine, psychology, political science and beyond.
Some are trying. For example, the Kavli Foundation in Los Angeles, US, recognising a need for answers, last year put out a call for scientists to investigate how neural systems are responding to ecological upheaval. With an initial $5 million, the foundation aims to illuminate how habitat loss, light pollution and other environmental insults may be influencing the molecular, cellular and circuit-level machinery of brains, human and otherwise. The central question is: in a biosphere where change is the only constant, are neural systems plastic enough to keep pace, or will they be left struggling to adapt?
The first wave of researchers to take up Kavli’s challenge are studying a diverse array of creatures, each uniquely positioned to reveal insights about the brain’s resilience in the face of planetary disruption. Wolfgang Stein at Illinois State University in the US and Steffen Harzsch at University of Greifswald in Germany, for example, focus on crustaceans, seeking to understand how their neural thermal regulators cope with rising temperatures in shallow and deep waters. Another group has targeted the brains of cephalopods, whose RNA-editing prowess may be key to their ability to tolerate plummeting oxygen levels in their increasingly suffocating aquatic habitats. A third Kavli cohort, led by Florence Kermen at University of Copenhagen in Denmark, is subjecting zebrafish to extreme temperatures, scouring their neurons and glial cells for the molecular signatures that allow them to thrive – even as their watery world heats up.
These initial investments have sparked federal curiosity. In December 2023, the US National Science Foundation joined forces with Kavli, inviting researchers to submit research proposals that seek to probe the ‘modulatory, homeostatic, adaptive, and/or evolutionary mechanisms that impact neurophysiology in response to anthropogenic environmental influence’. We may not be in arms-race territory yet, but at least there’s a suggestion that we’re beginning to walk in the right direction.
The brain, that spongy command centre perched atop our spinal cord, has always been a black box. As the climate crisis tightens its grip, and the ecological ground beneath our feet grows ever more unsteady, the imperative to pry it open and peer inside grows more urgent by the day. Already, we’ve begun to glimpse the outlines of a new neural cartography, sketched in broad strokes by the likes of Sisodiya and his colleagues. We know now that the brain is less a static lump of self-regulating tissue than it is a dynamic, living landscape, its hills and valleys shaped by the contours of our environment. Just as the Greenland ice sheet groans and buckles under the heat of a changing climate, so too do our synapses wither and our neurons wink out as the mercury rises. Just as rising seas swallow coastlines, and forests succumb to drought and flame, the anatomical borders of our brains are redrawn by each new onslaught of environmental insult.
But the dialogue between brain and biosphere is not a one-way street. The choices we make, the behaviours we pursue, the ways in which we navigate a world in crisis – all of these decisions are reflected back onto the environment, for good or for ill. So, I offer: in seeking to understand how a changing climate moulds the contours of our minds, we must also reckon with how the architecture of our thoughts might be renovated in service of sustainability.
The cartographers of the Anthropocene mind have their work cut out for them. But in the hands of neuroscience – with its shimmering brain scans and humming electrodes, its gene-editing precision and algorithmic might – there is something approaching a starting point. By tracing the pathways of environmental impact to their neural roots, and by following the cascading consequences of our mental processes back out into the world, we might yet begin to parse the tangled web that binds the fates of mind and planet.
This much is clear: as the gears of the climate crisis grind on, our brains will be swept along for the ride. The question is whether we’ll be mere passengers, or whether we’ll seize the controls and steer towards something resembling a liveable future. The weight of nature – the immensity of the crisis we face – is daunting. But it need not be paralysing. Bit by bit, synapse by synapse, we can chart a course through the gathering plague-clouds. It was Ruskin, at a slightly more legible moment in his life, who offered: ‘To banish imperfection is to destroy expression, to check exertion, to paralyse vitality.’ Even if we somehow could, we ought not banish the alleged imperfections of environmental influence on the mind. Instead, we ought to read in them an intimate, vital relationship between self and world.
In this, climatological neuroepidemiology – young and untested though it may be – is poised to play an outsized role. In gazing into the black box of the climate-altered mind, in illuminating the neural circuitry of our planetary predicament, the field offers something precious: a flicker of agency in a world that often feels as if it’s spinning out of control. It whispers that the levers of change are within reach, lodged in the squishy confines of our crania, waiting to be grasped. And it suggests that, even as the weight of nature presses down upon us, we might yet find a way to press back.
From Grief to Leadership: Building a Movement in Mike Brown’s Memory (Guest Column)
Ten years after the police killing of Michael Brown in Ferguson, Missouri sparked a nationwide civil rights and police reform movement — and the release of a new short film about his life, 'Happy Birthday MikeMike' — Brown's mother Lezley McSpadden-Head writes about honoring his life and impact with The Michael O.D. Brown We Love Our Sons & Daughters Foundation.
By Lezley McSpadden-Head
There is an annual memorial for Mike, and this year marked the tenth. But it’s difficult for me to attend. It feels eerily reminiscent of the day he died — the day he was killed. I still mourn my son, and without accountability, “MikeMike” is killed again, making this a decade of no accountability, justice or peace. Instead of losing myself in the grief of his death, I celebrate his life and his birthday, because the life he lived is more important than his tragic end. And there’s one thing I’ve learned since that day: I haven’t made plans since 2014. They say when you make plans, God laughs because He’s already made His own. I never understood that until Aug. 9, 2014, when a single phone call shattered my plans, changed my life and, in retrospect, changed the world. That call set me on a relentless fight for justice that continues to this day, taking me to the United Nations (UN) and exposing the world to the brutal injustice my family faced. I’m still on a mission to reopen Mike’s case and demand the accountability he deserves.
I founded The Michael O.D. Brown We Love Our Sons & Daughters Foundation in 2015 to create a safe space for mothers like me. It was a way for me to work through the grief and loneliness I felt. There were suddenly a lot of demands on me. People crowned me “The Mother of a Movement.” Then they said I wasn’t doing enough. But I wasn’t prepared for that. My heart was destroyed. I could barely breathe, and I felt like an unbearable weight was crushing me. Every anniversary of his death is overwhelming. Everybody wants to talk about it again, and it seems like some have capitalized on the tragedy for their own fundraising. That’s not right, because Mike gets lost in all of it.
I didn’t ask for my son to leave this earth before me, or for my family to become the faces of a national tragedy. I was consumed by rage, hopelessness and confusion, all while trying to be respectable and strong for everyone else, just hanging on by a tattered thread. So the first program I launched was Rainbow of Mothers, for women of all ethnicities who have lost a child to violence. While traveling for support, I had learned that, though Black people are killed at nearly three times the rate of other ethnicities, women of every background were grieving like me.
Everything I do is in Mike’s honor, and his life has inspired the foundation’s four pillars: health, justice, family and education.
I initially worked with law students at Howard University for five years to push for legislative changes, like promoting the Mike Brown bill reintroduced by Congresswoman Cori Bush (H.R.8914). The bill would increase access to mental and behavioral health services for people affected by violent encounters with law enforcement. It also aims to change public perceptions and raise awareness about systemic issues, especially systemic racism.
We offer grief management and self-care programs for mothers. We also provide support groups and financial assistance to help families heal and rebuild after loss.
For children, we run tutoring, mentorship and financial literacy programs like Camp Brown Kids and Brown’s Cousin’s Candy, teaching them about entrepreneurship, finances and budgeting. I also created a Memorial Scholarship. Mike had just graduated high school the year Wilson killed him, and he could have received a scholarship like the ones my foundation just awarded to 15 students. The requirements mirror his passion for the performing arts and include an interest in social justice and activism; the scholarship is open to students with a minimum 2.5 GPA. We gave away $45,000 this year and are looking to grow so we can give more.
But honoring Mike goes beyond scholarships and programs; it also involves tackling the mental health challenges that have deeply affected our family.
Even more shocking was his choice of a church in Ferguson. I think that’s brave of him and am proud of him for overcoming his trauma and embracing his faith.
We’ve all faced mental health challenges, each of us going through therapy at various points. My daughter initially went to college, but when other students discovered who she was, it became overwhelming, and she quit to come back home. I still have my moments, waves of grief that pull me down, but I do my best to cope. The world may have moved on, but the emptiness is a constant presence. I fill it by pouring myself into others daily. Amid this personal struggle, the racism we faced only intensified our pain and isolation.
Racism is still alive and well; the harassment I faced and the division I experienced after Mike’s death made that clear. The first time I was called the N-word was after I lost Mike. Ferguson itself was once a “sundown town,” a place where Black people weren’t welcome after dark. Bob McCulloch, the seven-term St. Louis County Prosecutor, was known for never prosecuting a cop and for denying Black people due process; a Black man had killed his father, who was also a police officer, years ago.
I want people to know that we need to fight for laws that protect us as Black people, and we need them before the next tragedy strikes. If we really want to be valued as whole people and not three-fifths of a person, we must say so. If we want equality, we must demand it. If we want justice, we must go after it. The fight for Mike is a fight for the justice that he and others deserve. I will not rest until my son’s legacy is one of change, unity, and hope for all families.
Lezley McSpadden-Head is a renowned author and social justice advocate, best known as the mother of Michael O.D. Brown, the African-American teenager whose tragic death at the hands of police officer Darren Wilson in Ferguson, Missouri, on August 9, 2014, became a catalyst for the Black Lives Matter movement. Born and raised in a close-knit community, McSpadden-Head was just 16 when she welcomed her son, affectionately known as “Mike Mike,” into the world. His untimely death shook her to the core and propelled her into the national spotlight. In her memoir, Tell the Truth & Shame the Devil, McSpadden-Head reflects on her journey as a mother and the deep connection she shared with her son, offering a poignant narrative that highlights her unwavering strength and resilience. Through her powerful storytelling, she continues to advocate for justice and reform, amplifying the voices of those affected by systemic inequalities.