Abstract
The purpose of the present paper is to review the Watts up? Pro AC power meter. Evaluations of the meter for measuring energy consumption by consumer electronics yielded acceptable levels of reliability. Implications and limitations for the use of this product in behavior analytic research and practice are discussed.
The automation of permanent product recording is useful for both researchers and practitioners.
Electronic or mechanical recording of data tends to be efficient, unobtrusive, and reliable. A product measure that is related to many human behaviors is the consumption of electricity, typically measured in watts or kilowatt hours.
Permanent product recording is a commonly used measure of behavior that involves the analysis of behavioral outcomes. Such forms of data collection have been used in the assessment and treatment of self-injury (Grace, Thompson, & Fisher, 1996; Iwata, Pace, Kissel, Nau, & Farber, 1990) and stereotypy (Crossland, Zarcone, Schroeder, Zarcone, & Fowler, 2005). This form of data collection may be automated through interactions with a mechanical or electrical device, or from the tangible outcomes produced by a target response (Kahng, Ingvarsson, Quigg, Seckinger, & Teichman, 2011). By virtue of its simplicity and reliability, permanent product recording is a highly efficient means of data collection (Riley-Tillman, Kalberer, & Chafouleas, 2005). One form of permanent product recording involves mechanically collected data generated as an outcome of specific target behaviors. Such automated recording is advantageous in behavior analytic practice because of (a) the relative ease of training its use (Riley-Tillman et al., 2005), (b) its unobtrusive nature, which prevents reactivity (e.g., Kazdin, 1979), and (c) the high degree of accuracy and reliability associated with the technique (Johnston & Pennypacker, 2009; Kelly, 1977).
Automated recording has been used in practice since the inception of applied behavior analysis. For example, Hayes and Cone (1977, 1981) used readings from residential electric meters to record energy consumption in conservation studies. Another example of automated recording comes from Van Houten, Malenfant, Austin, and Lebbon (2005), who used an automated data logger that recorded automobile driver responses, such as seatbelt closure and brake use, in a study aimed at promoting safe driving behaviors.
School- and clinic-based practitioners have also profited from such automated data collection. Automated data collection has become commonplace in the behavioral treatment of drug addiction (see Silverman, Kaminski, Higgins, & Brady, 2011). Specifically, researchers and clinicians can efficiently obtain information on drug or substance use through breathalyzers or urinalysis equipment; such equipment automatically quantifies traces of substances found in the behavioral product. In occupational settings, Sigurdsson and colleagues (2011) proposed that automated data collection of noise violations may aid in the identification of target behaviors in therapeutic workplaces. School-based issues have been measured using automated means of data collection as well. For example, Greene, Bailey, and Barber (1981) used an automated sound recording device on school buses to measure the frequency and duration of noisy outbursts; these data subsequently informed a successful behavior change procedure. Classroom-based practitioners also rely on automated recording using computer applications or completed work materials, and Strang and George (1975) used similar strategies to monitor and control classroom noise.
The above examples highlight only a handful of the many articles that have used automated permanent product recording to efficiently document behavior. As technology has advanced, behavior analysts are presented with an ever-growing set of tools to automate data collection. This product review focuses on a device that measures energy consumption by appliances, tools, and other objects that plug into electrical outlets for their power supply. We believe such a product may enhance behavior analytic service delivery in several domains. First, behavior analysis offers practical solutions to issues of environmental sustainability (Heward & Chance, 2010). By recording electrical energy consumption, behavior analysts can help promote a sustainable future through conservation. Second, behavior analysts in organizational behavior management can assist agencies by identifying cost-cutting energy conservation approaches (e.g., Bekker et al., 2010). Third, consultants in human service or workplace settings are often charged with identifying outcome measures for behaviors that involve, or can be monitored by, electrical devices (e.g., Panos & Freed, 2007). Having an automated electrical energy monitoring device offers a new means of data collection that can make behavioral observations more accurate, efficient, and unobtrusive. Finally, many behavior analysts are intimately involved in organizational quality assurance teams (e.g., Christian, 1981). By monitoring electrical energy consumption, behavior analysts can identify whether staff are accessing computers or televisions during inappropriate times, or failing to run devices (e.g., microwaves, washers/dryers) when scheduled to do so (i.e., a procedural fidelity issue).
Our product review highlights one exemplary electrical energy monitoring device, the Watts up? Pro. The review consists of (a) a product description, (b) several empirical demonstrations of the reliability and utility of the device, and (c) suggestions for ways to integrate the Watts up? Pro into behavior analytic services. We believe that this device, and others like it, offers behavior analysts a novel and efficient means of collecting data on responses that consume electricity.
Product Description and Specifications
The device of interest in this product review is the Watts up? Pro AC power meter (model 99333; http://www.wattsupmeters.com/) from Electronic Educational Devices, Inc. (see Figure 1). Several similar products are available for purchase.1 Table 1 describes various features of the Watts up? Pro in comparison to other commercially available products. Although the Watts up? Pro does not feature WiFi data transfer capability, we believe its relative cost and other attributes make it the most attractive option. WiFi capability may also be moot in settings where privacy policies or information technology restrictions prohibit noninstitutional or unencrypted wireless devices from transmitting data. Such restrictions are in place at our university setting, necessitating reliance on non-WiFi energy measurement devices. An Internet-accessible version (Watts up? .Net) is available from http://www.wattsupmeters.com for $235.95; the .Net model uses a built-in web server, circumventing the need for WiFi connectivity (note, however, that it is also WiFi compatible). The Watts up? Pro was selected for its features, primarily efficiency of data collection in organizational contexts and relative cost (it retails for $130.95). Readers interested in learning more about the device are encouraged to consult the Watts up? products page at https://www.wattsupmeters.com/secure/products for technical (i.e., engineering) nonempirical reviews and links to multimedia demonstrations and discussions of the device. The Watts up? Pro has also been evaluated by Consumer Reports Magazine (2009) and found to be reliable and valid for everyday consumer needs. This product review constitutes the first rigorous examination of the reliability of the device for use in time-series analyses of energy consumption, similar to what would be necessary in behavior analytic studies of energy conservation.
Figure 1. Photograph of the Watts up? Pro meter.
Table 1
Comparison of Energy Measurement Systems' Features. Features and Information Were Obtained via Product Websites.
Specifications
The Watts up? Pro meter receives electricity through a 2 m U.S. standard grounded electric cable, and appliances to be measured are plugged into a standard grounded outlet on the meter. Nineteen variables related to energy consumption are measurable, including watts, power factor, volt amps, watt hours (a cumulative measure of energy consumption), duty cycle, line frequency, energy cost, volts, amps, and elapsed time. Data can be viewed on an LCD screen or exported to a computer in spreadsheet format via a USB cable. The manufacturer-reported accuracy of measurement is ±1.5% + 0.3 W.2 According to the manufacturer's documentation, the accuracy of amp and power factor readings is reduced at loads of less than 60 W, and wattage readings are accurate within the range specified for loads above 0.5 W. The meter's energy cost metric is programmable to calculate fiscal costs associated with appliance usage by entering the local price (in U.S. dollars [$]) per kilowatt hour (kWh). Data are automatically recorded at user-defined intervals in increments of one second. Depending on the number of variables being recorded, the meter can store between 1,000 records (recording all variables once per logging interval) and 32,000 records (recording watts only). Data are retrieved with a supplied computer program that can automatically graph the data. The data displayed in this product review were not graphed using the meter's built-in graphing software; rather, we used a third-party spreadsheet and graphing program to depict all of our empirical data. Data from the meter can be easily exported in a format compatible with Microsoft® Excel or other specialized graphing software. The program is also used to select the variables recorded, the logging interval, and the price per kWh (see Figures 2 and 3).
Figure 2. Screenshots of sample menus available in the Watts up? Pro software. Menus depicted in the figure are Rates (left panel), Interval (middle panel), and Logged Items (right panel).
Figure 3. Screenshots of the graphing (left) and data output (right) of the Watts up? Pro software.
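Because the meter's logs export in a spreadsheet-compatible format, readers who prefer scripted analysis over the bundled software can load the exported file directly. The following is a minimal sketch under assumed conditions: the file name and the column labels (time_s, watts, watt_hours) are hypothetical placeholders, and the price per kWh is an illustrative value; check the headers of your own export before use.

```python
# Minimal sketch (not the vendor's software): loading an exported Watts up? Pro
# log for analysis. File name, column labels, and price are assumptions.
import pandas as pd

PRICE_PER_KWH = 0.10  # illustrative local rate in U.S. dollars per kWh

log = pd.read_csv("wattsup_export.csv")        # hypothetical export file

mean_watts = log["watts"].mean()               # average instantaneous load
total_wh = log["watt_hours"].iloc[-1]          # final cumulative watt hour reading
estimated_cost = (total_wh / 1000) * PRICE_PER_KWH  # Wh -> kWh, then price

print(f"Mean load: {mean_watts:.1f} W")
print(f"Total consumption: {total_wh:.1f} Wh (approx. ${estimated_cost:.2f})")
```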
Empirical Evaluations of the Product
To validate the Watts up? Pro AC power meter for use in research and practice, several empirical evaluations were conducted to assess the reliability and consistency with which the meter records energy consumption. For the purposes of the present review, watts (W) and watt hours (Wh) were chosen as common measures of energy consumption; the cost of electricity is typically determined in units of kilowatt hours (kWh). In each evaluation, the meter was connected to a common consumer electronic product, and the meters were set to log both watts and watt hours at 1 s intervals. During reliability testing, to ensure the meters were measuring precisely the same consumption of electricity, two meters were placed in a daisy-chain configuration (see Figure 4); that is, the device being measured was connected to a meter, which was in turn connected to a second meter. The logs of power consumption in watts from both meters were compared for reliability using the partial-agreement-within-intervals method.
Figure 4. Diagram of the daisy-chained testing apparatus used for reliability testing. The top diagram represents the configuration for Evaluation I and the bottom diagram the configuration for Evaluations II and III. To ensure each meter was measuring the same power consumption during testing, two meters were chained together. Dimensions not to scale.
Reliability of instantaneous watts at 1 s intervals was calculated by first dividing the smaller reading (y1) by the larger (y2) for each interval. These interval reliabilities were then summed, divided by the total number of intervals (x) to obtain the average, and multiplied by 100. To accommodate the manufacturer's declared accuracy tolerance, the range of wattage consumed was calculated for each meter (meter reading ±1.5% + 3 counts). If the ranges for both meters overlapped, the interval was scored as an agreement; if the ranges did not overlap, partial agreement was calculated between the closest values of the two ranges. Further, the manufacturer specifies that watt readings are not accurate below loads of 0.5 W; as a result, if both meters measured 0.8 W (0.5 W + 1.5% + 3 counts) or less, the interval was considered an agreement.
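To make the agreement algorithm concrete, the sketch below implements our reading of the interval-by-interval calculation just described, including the tolerance adjustment (±1.5% + 0.3 W) and the 0.8 W automatic-agreement rule. The function and variable names are ours, not part of the meter software, and readings are assumed to be nonnegative.

```python
# Sketch of the partial-agreement-within-intervals calculation for instantaneous
# watts, adjusted for the manufacturer's stated tolerance (+/-1.5% + 0.3 W).

def watt_reliability(meter_a, meter_b):
    """meter_a, meter_b: lists of watt readings logged at matching 1 s intervals."""
    agreements = []
    for a, b in zip(meter_a, meter_b):
        # Both readings at or below 0.8 W (0.5 W + 1.5% + 3 counts): full agreement.
        if a <= 0.8 and b <= 0.8:
            agreements.append(1.0)
            continue
        # Tolerance band around each reading.
        a_lo, a_hi = a - (0.015 * a + 0.3), a + (0.015 * a + 0.3)
        b_lo, b_hi = b - (0.015 * b + 0.3), b + (0.015 * b + 0.3)
        if a_lo <= b_hi and b_lo <= a_hi:
            agreements.append(1.0)  # overlapping ranges: scored as agreement
        else:
            # Non-overlapping ranges: partial agreement between the closest band
            # edges (the middle two of the four boundaries), smaller / larger.
            low, high = sorted([a_lo, a_hi, b_lo, b_hi])[1:3]
            agreements.append(low / high)
    return 100 * sum(agreements) / len(agreements)
```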
Because the meters record watt hours as a cumulative measure, the watt hour data were first transformed to reflect increases in watt hours consumed from one interval to the next. To do this, watt hours from the immediately preceding interval were subtracted from those of the target interval. Three interval lengths were used to determine the interval at which the readings were most reliable. For the 1 s interval, the value of the preceding interval was subtracted from the target interval. For the 10 s interval, the last reading in the preceding interval was subtracted from the last reading in the target interval; the same transformation was applied for the 1 min interval. For all interval lengths, reliability of these differences across meters was calculated by dividing the smaller value by the larger for each interval, averaging across intervals, and multiplying by 100. If the differences for both meters were 0 for a given interval, a value of 1 was used for that interval when averaging. Additionally, the total agreement algorithm (smaller final reading divided by the larger final reading, multiplied by 100) was used to calculate agreement for total watt hours measured at the conclusion of Evaluations I and II.
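The watt hour calculations can be sketched the same way. Again, this is a minimal illustration under our reading of the procedure rather than the authors' actual spreadsheet formulas; function and variable names are hypothetical, and the cumulative logs are assumed to be sampled at 1 s resolution.

```python
# Sketch of the watt hour reliability calculations: cumulative readings are first
# differenced per interval (1 s, 10 s, or 60 s), then compared across meters.

def interval_wh_reliability(cum_a, cum_b, step_s):
    """cum_a, cum_b: cumulative watt hour logs at 1 s resolution; step_s: 1, 10, or 60."""
    ratios = []
    for i in range(step_s, min(len(cum_a), len(cum_b)), step_s):
        diff_a = cum_a[i] - cum_a[i - step_s]   # increase over the target interval
        diff_b = cum_b[i] - cum_b[i - step_s]
        if diff_a == 0 and diff_b == 0:
            ratios.append(1.0)                  # both unchanged: counted as agreement
        else:
            ratios.append(min(diff_a, diff_b) / max(diff_a, diff_b))
    return 100 * sum(ratios) / len(ratios)

def total_wh_reliability(cum_a, cum_b):
    """Total agreement: smaller final reading divided by the larger, times 100."""
    return 100 * min(cum_a[-1], cum_b[-1]) / max(cum_a[-1], cum_b[-1])
```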
Empirical Evaluation I: Reliability of Light Bulb Wattage Measurement
The purpose of the first empirical evaluation was to assess the reliability of the Watts up? Pro meter in monitoring the wattage of an appliance with a stable rate of watt consumption. Materials included two Watts up? Pro meters, two 40 W incandescent light bulbs (GE extra soft white), one 60 W incandescent light bulb (GE extra soft white), a timer, and one floor lamp rated for use with bulbs of up to 100 W. To assess the accuracy of the Watts up? Pro meters, two meters concurrently measured the energy consumption of a single light bulb using the daisy-chain configuration depicted in Figure 4. Meters were set to record instantaneous watts and watt hours (a cumulative measure) at 1 s intervals. For each test, a light bulb was first inserted into the lamp; the lamp was plugged into one meter, which was then plugged into the other meter. Tests consisted of two cycles of 5 min 'on' and 'off' phases, for a total of 20 min. Prior to the start of each test, the lamp was placed in the 'off' position, the meter was plugged directly into a standard wall outlet, and a timer was started. After 30 s, the lamp was switched to the 'on' position, which began the test.
Two tests were conducted with 40 W bulbs. During the first test, Meter A was plugged into the wall, with Meter B plugged into Meter A, and the lamp plugged into Meter B (Arrangement 1). The arrangement of the meters was reversed during the second 40 W bulb test and a new bulb was used (Arrangement 2). One 60 W bulb test was conducted using Arrangement 1.
Results for each test are depicted in Figure 5. The top panel shows the instantaneous watts and watt hours for the 40 W Arrangement 1 test, the middle panel shows these measures for the 40 W Arrangement 2 test, and the bottom panel shows the 60 W Arrangement 1 test. During the first 40 W test, mean watts for Meter A during 'on' and 'off' phases (40.0 and 0.7, respectively) were slightly higher than for Meter B (39.7 and 0, respectively). Total watt hours were also slightly higher for Meter A at the conclusion of the test: 6.7 Wh, compared to 6.4 Wh.
Figure 5. Instantaneous watts and watt hours for the 40 W Arrangement 1 (top panel), 40 W Arrangement 2 (middle panel), and 60 W Arrangement 1 (bottom panel) tests as measured by Meters A and B. Two meters (black and blue data paths) measured energy consumption simultaneously using the daisy-chain configuration. Instantaneous watts are scaled to the left y-axis; cumulative watt hours are scaled to the right y-axis.
During the second 40 W test, mean watts for Meter B (41.3) were slightly higher than for Meter A (40.4) during 'on' phases; however, both meters recorded 0 W during each of the 'off' phases. Total watt hours for both meters were 6.7 Wh at the conclusion of the test. It was anticipated that the meter with both the lamp and the other meter plugged into it (Meter B) would record higher watts and watt hours due to the energy consumed by the additional meter, but this was not always the case. It appeared that Meter B was not as sensitive to very low levels of energy consumption (approximately less than 1 W). However, these differences in measurement were within the accuracy range specified by the manufacturer.
During the 60 W test, mean watts for Meters A and B were approximately equal during 'on' phases (60.3 and 60.4, respectively) and were again slightly higher for Meter A during 'off' phases (0.7 and 0, respectively). As with the 40 W Arrangement 1 test, total watt hours were slightly higher for Meter A than for Meter B (10.1 and 9.9 Wh, respectively).
Reliability calculations for watts and watt hours for each of the interval lengths are presented in Table 2. Reliability data for watts have been adjusted to account for the manufacturer's reported error. For the 40 W Arrangement 1 test, reliability of watts was high for 'on' and 'off' phases, with overall reliability at 99.9%. In general, reliability for watt hours was not as high as for instantaneous watts. During 'on' phases, average watt hour reliability was lowest for the 1 s interval and highest for the 1 min interval length; the opposite was true during the 'off' phases. Interestingly, the highest overall average watt hour reliability was obtained with the 10 s interval. Total reliability for watt hours produced the highest level of agreement at 95.5%.
Table 2
Reliability Calculations for Watts and Watt Hours for Each of the Interval Lengths
For the 40 W Arrangement 2 test, adjusted reliability for watts was consistently high across all phases, with overall reliability at 99.9%. Reliability of the original (unadjusted) data was notably different during 'off' phases when compared to the previous test (100% versus 0%), which is accounted for by both meters always reading 0 during these phases. Similar to the previous test, average watt hour reliability during 'on' phases was lowest for the 1 s interval and highest for the 1 min interval length. Watt hour reliability was 100% across all 'off' phases. Overall reliability was lowest for the 1 s interval and highest for the 1 min interval. Total reliability for watt hours produced the highest level of agreement at 100%.
For the 60 W Arrangement 1 test, reliability for watts was similar to that of the previous two 40 W bulb tests, with overall reliability at 99.9%. In general, reliability of watt hours was somewhat lower than in the 40 W tests; however, the rank ordering of reliability by interval size was the same as for the 40 W Arrangement 2 test. Total reliability for watt hours again produced the highest level of agreement, at 98.0%.
Overall, it appears that the two meters were highly reliable in measuring instantaneous watts when adjusting for the manufacturer-specified tolerance. Although the manufacturer also indicated decreased accuracy when measuring lower rates of consumption, our results indicate acceptable agreement when measuring consumption of as little as 40 W. Calculation of watt hour reliability indicates that the degree of reliability fluctuates depending on the interval size used. Given that energy consumption at small scales such as those used in these calculations (1 s, 10 s, and 1 min intervals) is not likely to reflect meaningful differences in energy consumption, and that moment-to-moment energy readings showed high levels of agreement, we recommend that total reliability be used for assessing reliability of watt hours.
Empirical Evaluation II: Computer Power Consumption Measurement
To evaluate the reliability and consistency with which the meter records the power consumption of desktop computers, two Watts up? Pro meters were connected, in the daisy-chain configuration, to a power strip supplying both a desktop computer and a monitor. The computer used for the reliability test was a Dell® Optiplex 380 with a Dell® 17-inch wide-aspect LCD monitor. The energy consumption of the computer and monitor, in both instantaneous watts and cumulative watt hours, was measured during several states representing common daily usage of a computer (i.e., CPU and monitor on, monitor in standby, CPU and monitor in standby, and both powered down). Each state was measured twice for an interval of 10 min. First, a 1 min baseline phase was conducted with both the computer and monitor off. The test began by simultaneously powering on both the CPU and monitor. After the first 10 min phase had elapsed, the monitor was placed in standby mode for the next interval. Next, both the CPU and monitor were placed in standby mode. Finally, the monitor and CPU were powered down. Each of these states was then repeated.
The results of the evaluation are presented in Figure 6. During both start-up phases, a spike in power consumption was observed followed by a gradual stabilization. Power consumption dropped significantly when the monitor was placed in standby mode. When both the CPU and monitor were in standby mode, power consumption decreased to approximately 2 W. A slight difference was observed between standby mode and powered off. During the second progression through the power states, similar levels of watts, and similar slopes of the cumulative record of watt hours were obtained. The consistency of level and slope provided by the meter suggests that these data could be used to determine the state of the CPU and monitor.
Figure 6. Power consumption by a desktop computer and monitor during various power states. Two meters (black and blue data paths) measured energy consumption simultaneously using the daisy-chain configuration. Instantaneous watts are scaled to the left y-axis; cumulative watt hours are scaled to the right y-axis.
Reliability for the measurement of watts was 99.7%. Reliability for watt hours varied greatly depending on the interval size selected: reliability for the 1 s, 10 s, and 1 min interval lengths was 88.8%, 87.8%, and 76.1%, respectively. This pattern differed from the first empirical evaluation, with the highest reliability obtained with 1 s intervals and the lowest with 1 min intervals. Although reliability for watt hours was lower, the percentages reported are slightly misleading given that the difference between readings never exceeded 0.1 watt hours during the evaluation and the final readings for both meters were equal (total agreement of 100%). Overall, it appears that Watts up? Pro meters can be used reliably to measure the energy consumption of computers. These results support the use of the instrument in practical applications designed to track energy consumption or to measure computer usage through a product measure.
Empirical Evaluation III: Long-term Power Consumption Monitoring of a Computer
The purpose of the third evaluation was to use the Watts up? Pro meter to evaluate the energy consumption of a desktop computer over a longer period of time to mimic its potential use in field settings. In order to collect data on reliability, two meters were connected to a desktop computer in a daisy-chain configuration as in Evaluation II. The desktop computer used in this evaluation was a Dell® Vostro 230 equipped with a 17-inch wide-aspect LCD monitor and a standard keyboard and mouse. The computer was located in an office and was the primary work computer for the first author.
The computer was used regularly Monday through Friday from 9:00 am to 4:00 pm to simulate a typical workday. The dependent measure for the present evaluation was energy consumption of the computer, as measured by cumulative watt hours consumed. This was measured under two conditions at 12 h intervals. During the baseline condition, the computer's power management settings were left at the default settings. The monitor was set to enter standby mode after 15 min of inactivity and the computer was set to never enter standby mode. Once a stable pattern of energy consumption was observed, the intervention phase was implemented by changing the power management settings such that the monitor entered standby mode after 10 min of inactivity and the computer entered standby mode after 20 min of inactivity. No other changes were made to the arrangement.
The results of the evaluation are presented in Figure 7. During both baseline phases, relatively high, yet stable levels of energy consumption were observed with slightly more variability during the second baseline phase. Within the intervention phase (i.e., power saving functions), power consumption decreased substantially during the remaining workdays (i.e., Wed., Thurs., Fri.) with another substantial decrease during the weekend. The average proportional difference in energy consumption was calculated for workdays and weekend days during baseline and intervention. For workdays, energy consumption decreased by an average of 65.2% when in intervention compared to baseline. For weekend days, energy consumption decreased by an average of 88.8% when in intervention compared to baseline. Although large decreases in energy consumption were observed during the entire intervention phase, results suggest that the lowest amount of energy consumption occurred during the intervention weekend while the computer was not in use.
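For clarity, the proportional differences reported above follow the usual percent-decrease calculation (our notation, not the authors'):

\[
\%\,\text{decrease} \;=\; \left(1 - \frac{\overline{Wh}_{\text{intervention}}}{\overline{Wh}_{\text{baseline}}}\right) \times 100
\]

A 65.2% workday decrease therefore means that intervention workdays consumed, on average, only about 34.8% of the energy of comparable baseline workdays.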
Figure 7. Power consumption by a desktop computer and monitor over several days. Two meters (black and blue data paths) measured energy consumption simultaneously using the daisy-chain configuration.
Reliability for the measurement of watt hours during baseline and intervention was 99.8% and 88.8%, respectively. Overall, these results suggest the device would be useful in field settings, such as an office or group home. Further, it may be the case that energy consumption was lowest when the computer was not in use (i.e., nights and weekends). Given that kilowatt hours are the unit in which energy costs are billed, the current evaluation demonstrates the potential savings a simple default intervention may provide. For example, the difference in energy cost between baseline and intervention in the current evaluation would have yielded a savings of $0.05 per day for weekdays and $0.07 per day for weekend days. Although these represent only modest savings, the evaluation involved only one desktop computer. The benefits of a default intervention for energy savings, facilitated by the use of the Watts up? Pro, might be more fully realized when scaled up to an institution or organization operating many thousands of computers and other electronic devices.
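As a rough illustration of how such cost figures follow from the logged watt hours (the consumption reduction and the $0.10/kWh rate below are assumed, illustrative values rather than the actual figures from the evaluation), a daily reduction of about 0.5 kWh priced at $0.10 per kWh gives

\[
0.5\ \text{kWh/day} \times \$0.10/\text{kWh} \;=\; \$0.05\ \text{per day},
\]

which is the order of magnitude of the weekday savings reported here.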
Conclusion
This product review evaluated the Watts up? Pro AC power meter for its ability to serve as an automatic means of data collection. The meter's primary contribution to the practice of behavior analysis is its ability to record the energy consumption of AC-powered devices plugged into a standard wall outlet. Behavior analysts may be interested in using this meter to covertly monitor energy consumption, an outcome related to various human behaviors. The findings of this product review suggest that the meter is sensitive to fluctuations in energy-consuming behaviors. Moreover, the reliability of the meter is adequate for behavior analytic purposes.
Despite the apparent utility of the meter for various behavioral observations, it is not without limitations. First, the meter is not sufficiently reliable when recording very low rates of energy consumption; this limitation is clearly disclosed to the consumer in the device manual. To circumvent the resulting reliability issues, behavior analysts should score any measure of 1 W or less as 0. Second, there appear to be meter-specific differences in precision. This limitation is also noted by the manufacturer, and our evaluations suggest that the precision tolerance described by the manufacturer is accurate. To offset the issue of precision, behavior analysts should rotate meters between conditions if multiple meters are being used. For example, if the energy consumption of multiple appliances is being assessed simultaneously, the behavior analyst should rotate the meters among the appliances to counterbalance the duration that each meter is used with each appliance; this counterbalancing would wash out the slight differences in measurement over time. If multiple conditions are studied (e.g., baseline and energy-savings interventions), the rotation should be counterbalanced within each condition. If only one meter is necessary, the issue of precision is moot given the relatively stable degree of precision within each meter. In sum, the limitations of the meter are most problematic when simultaneously comparing energy consumption across multiple meters and when very low rates of energy consumption are to be recorded for extended durations. These issues are easily offset, however, using the suggestions described above.
In conclusion, we recommend the Watts up? Pro AC power meter to behavior analysts for discreetly collecting energy consumption data. Despite the Watts up? Pro's inability to remotely monitor real-time data, which requires manual retrieval of results, the meter appears well suited for recording changes in energy consumption over time. To maximize the utility of the meter without sacrificing reliability, we recommend using logging intervals of 1 min or longer and measuring only watts and cumulative watt hours. This configuration also permits more extended bouts of undisturbed data collection: 10,863 records, which translates to roughly one week of 1 min records (if set to 1 s logging, this translates to roughly 3 hr of data). When analyzing Watts up? Pro data, we recommend relying on the cumulative watt hour metric as the primary measure, as it presents a gross measure of total energy use over time; watt hours also map directly onto electricity pricing structures for billing purposes. As a secondary unit of measurement, watts may allow the behavior analyst to home in on rates of energy consumption at snapshots in time. The watt measure could be useful for assessing the rate of energy consumption associated with various contextual manipulations or variables.
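The storage figures cited above follow directly from the record count:

\[
10{,}863\ \text{records} \times 1\ \text{min/record} \approx 181\ \text{hr} \approx 7.5\ \text{days}, \qquad
10{,}863\ \text{records} \times 1\ \text{s/record} \approx 3.0\ \text{hr}.
\]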
In an era of increased accountability for services rendered to clients and of economic uncertainty, the demand for behavior analytic solutions is unlikely to decrease in the foreseeable future. Whether covertly measuring appliance use during overnight shifts in a group home or monitoring energy consumption to promote environmental sustainability and cost savings in office settings, behavior analysts have an affordable, reliable, and automatic means of data collection in the Watts up? Pro AC power meter. Through further use and investigation, we believe that the utility and applicability of the Watts up? Pro AC power meter as a behavioral measurement tool will become increasingly apparent. We hope that readers of Behavior Analysis in Practice will consider adding this meter to their measurement 'toolbox,' whether for self-monitoring, institutional behavior management, or assessment of conservation intervention efficacy.
Footnotes
The authors thank Kevin Luczynski for his insightful comments and recommendations on a previous draft of this manuscript.
1 The authors claim no conflict of interest in the selection of the product in this review. Device selection was based on comparisons of features and relative cost between several available options (see Table 1).
2 The manufacturer reported error as ±1.5% + 3 counts. A count is the least significant figure in the metric, which in the context of the current review would result in a wattage accuracy of ±1.5% + 0.3 W. A reading of 100 W would be accurate with a tolerance of ±1.8 W. Counts have been converted to watts in-text.
References
- Bekker M. J., Cumming T. D., Osborne N. K. P., Bruining A. M., McClean J. I., Leland L. S. Encouraging electricity savings in a university residential hall through a combination of feedback, visual prompts, and incentives. Journal of Applied Behavior Analysis. 2010;43:327–331.
- Christian W. P. Programming quality assurance in residential rehabilitation settings: A model for administrative work performance standards. Journal of Rehabilitation Administration. 1981;5(1):26–33.
- Crossland K. A., Zarcone J. R., Schroeder S., Zarcone T., Fowler S. Use of an antecedent analysis and a force sensitive platform to compare stereotyped movements and motor tics. American Journal of Mental Retardation. 2005;110:181–192.
- Energy monitors. Consumer Reports Magazine. 2009, March. Retrieved from http://www.consumerreports.org/
- Grace N. C., Thompson R., Fisher W. W. The treatment of covert self-injury through contingencies on response products. Journal of Applied Behavior Analysis. 1996;29:239–242.
- Greene B. F., Bailey J. S., Barber F. An analysis and reduction of disruptive behavior on school buses. Journal of Applied Behavior Analysis. 1981;14:177–192.
- Hayes S. C., Cone J. D. Reducing residential electrical energy use: Payments, information, and feedback. Journal of Applied Behavior Analysis. 1977;10:425–435.
- Hayes S. C., Cone J. D. Reduction of residential consumption of electricity through simple monthly feedback. Journal of Applied Behavior Analysis. 1981;14:81–88.
- Heward W. L., Chance P., editors. The human response to climate change: Ideas from behavior analysis [Special section]. The Behavior Analyst. 2010;33:145–206.
- Iwata B. A., Pace G. M., Kissel R. C., Nau P. A., Farber J. M. The Self-Injury Trauma Scale: A method for quantifying surface tissue damage caused by self-injurious behavior. Journal of Applied Behavior Analysis. 1990;23:99–110.
- Johnston J. M., Pennypacker H. S. Strategies and tactics of behavioral research. 3rd ed. New York, NY: Routledge; 2009.
- Kahng S., Ingvarsson E. T., Quigg A. M., Seckinger K. E., Teichman H. M. Defining and measuring behavior. In: Fisher W. W., Piazza C. C., Roane H. S., editors. Handbook of applied behavior analysis. New York, NY: Guilford; 2011. pp. 113–131.
- Kazdin A. E. Unobtrusive measures in behavioral assessment. Journal of Applied Behavior Analysis. 1979;12:713–724.
- Kelly M. B. A review of the observational data-collection and reliability procedures reported in the Journal of Applied Behavior Analysis. Journal of Applied Behavior Analysis. 1977;10:97–101.
- Panos R., Freed T. The benefits of automatic data collection in the fresh produce supply chain. In: IEEE International Conference on Automation Science and Engineering. Piscataway, NJ: Institute of Electrical and Electronics Engineers; 2007. pp. 1034–1038.
- Riley-Tillman T. C., Kalberer S. M., Chafouleas S. M. Selecting the right tool for the job: A review of behavior monitoring tools used to assess student response-to-intervention. The California School Psychologist. 2005;10:81–91.
- Sigurdsson S. O., Aklin W., Ring B. M., Needham M., Boscoe J., Silverman K. Automated measurement of noise violations in the therapeutic workplace. Behavior Analysis in Practice. 2011;4(1):47–52.
- Silverman K., Kaminski B. J., Higgins S. T., Brady J. V. Behavior analysis and treatment of drug addiction. In: Fisher W. W., Piazza C. C., Roane H. S., editors. Handbook of applied behavior analysis. New York, NY: Guilford; 2011. pp. 451–471.
- Strang H. R., George J. R., III. Brief technical report: Clowning around to stop clowning around: A brief report on an automated approach to monitor, record, and control classroom noise. Journal of Applied Behavior Analysis. 1975;8:471–474.
- Van Houten R., Malenfant J. E. L., Austin J., Lebbon A. The effects of a seatbelt-gearshift delay prompt on the seatbelt use of motorists who do not regularly wear seatbelts. Journal of Applied Behavior Analysis. 2005;38:195–203.