This is a high-level overview of computing wind speeds from the hot film voltages.
There is example R code Steve used to calibrate wind tunnel data, mostly to check the gain that could be used in the ADC. The Python implementation will be based on that. As I understand it, the calibration curve is based on a physical relation, so only the coefficients of that curve need to be calculated. (Here is one "reference" which explains that.) From discussions with Jielun and what I remember from Steve, the calibrations will be updated regularly, such as every hour, so every hour will have different coefficients. My thoughts at the moment are to implement this in these steps:
Start by writing code to read the hot film voltages from netcdf and the corresponding sonic wind speeds, so they can be compared (plotted) and eventually calibrated. Read the sonic wind speeds from the high-rate sonic dataset. Eventually it might be interesting to compare the high-rate sonic data directly against the 20-Hz means of the hot film wind speeds, both to see how much variation there is and to see if we can characterize sampling lag in the DSM. (The hot film samples are timestamped with the GPS pulse-per-second, so they should be much more precise than the sonic timestamps.) And since we may want to use a mean period other than 5 minutes, the high-rate data will allow different averaging periods.
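To make the hot film and sonic streams comparable, the high-rate hot film samples have to be block-averaged down to a common rate. A minimal sketch of that step (the 2000-Hz input rate and variable names here are only illustrations, not the actual project values):

```python
import numpy as np

def block_means(samples, in_rate, out_rate):
    """Average a 1-D sample stream down to a lower rate using
    non-overlapping block means; in_rate must be a multiple of out_rate."""
    ratio = in_rate // out_rate
    n = (len(samples) // ratio) * ratio  # drop any trailing partial block
    return samples[:n].reshape(-1, ratio).mean(axis=1)

# illustration: one second of hypothetical 2000-Hz voltages -> 20-Hz means
volts = np.linspace(1.0, 2.0, 2000)
means = block_means(volts, 2000, 20)
```

The same helper would work for forming 5-minute point means from the 20-Hz series, just with different rates.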
Given a calibration time period, grab all the 5-minute means during that period of both hot film voltages and sonic components, then compute the calibration coefficients and store them. Also store some diagnostics about the quality of the fit, to flag time periods which perhaps should be ignored. The calibration time period and point period are parameters that would start out as 1 hour and 5 minutes, but we might want to be able to adjust them.
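The physical relation behind the calibration curve is presumably King's law, E² = A + B·Uⁿ, with n typically near 0.45; Steve's R code would be the authority on the exact form. A sketch of the coefficient fit over one calibration period, with the exponent fixed so the fit reduces to linear least squares (all names hypothetical):

```python
import numpy as np

N_EXP = 0.45  # fixed King's-law exponent, an assumption for this sketch

def fit_calibration(u_mean, e_mean, n=N_EXP):
    """Least-squares fit of E^2 = a + b*U^n over one calibration period's
    point means; returns (a, b) plus simple fit-quality diagnostics."""
    e2 = np.asarray(e_mean) ** 2
    X = np.column_stack([np.ones_like(u_mean), np.asarray(u_mean) ** n])
    (a, b), *_ = np.linalg.lstsq(X, e2, rcond=None)
    fitted = X @ np.array([a, b])
    ss_res = np.sum((e2 - fitted) ** 2)
    ss_tot = np.sum((e2 - e2.mean()) ** 2)
    return (a, b), {"r2": 1.0 - ss_res / ss_tot, "npoints": len(e2)}

# synthetic check: means generated from known coefficients a=1.2, b=0.8
u = np.linspace(1.0, 10.0, 12)
e = np.sqrt(1.2 + 0.8 * u ** N_EXP)
(a, b), diag = fit_calibration(u, e)
```

The r² and point count returned here are examples of the diagnostics that could be stored alongside the coefficients to flag poor calibration periods.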
The question of which wind speed to use is asked in #2. If the wind speed has to be derived from the tilt-corrected coordinates, that means rotating the $(u,v)$ coordinates by the azimuth offset back to sonic coordinates, so $u$ is positive into and along the sonic boom, perpendicular to the hot film. If the instrument coordinates (without tilt corrections applied) should be used, then that means taking the $u$ component directly from a high-rate dataset in instrument coordinates, which I don't think has been generated yet.
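If the rotation route is taken, the azimuth rotation back to sonic coordinates is just a 2-D rotation of the horizontal components. A sketch, noting that the sign convention here is an assumption that would need to be checked against the dataset's actual conventions:

```python
import numpy as np

def rotate_to_sonic(u, v, azimuth_offset_deg):
    """Rotate tilt-corrected (u, v) back to sonic instrument coordinates.
    The rotation direction/sign convention here is an assumption; verify
    against the convention used when the azimuth offset was applied."""
    th = np.radians(azimuth_offset_deg)
    u_s = u * np.cos(th) + v * np.sin(th)
    v_s = -u * np.sin(th) + v * np.cos(th)
    return u_s, v_s
```

Whatever the convention turns out to be, the rotation preserves horizontal speed, so only the partitioning between $u$ and $v$ is at stake.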
Given the hot film sample-rate voltages and the calibration coefficients, compute hot film wind speeds and write them to netcdf, including the calibration information and diagnostics.
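This last step is the inverse of the calibration curve. Assuming the King's-law form E² = a + b·Uⁿ sketched earlier (coefficients and names hypothetical), the inversion could look like:

```python
import math
import numpy as np

def voltages_to_wind(e, a, b, n=0.45):
    """Invert E^2 = a + b*U^n for wind speed U. Voltages at or below the
    zero-wind intercept (E^2 <= a) produce NaN rather than complex roots."""
    e2 = np.asarray(e, dtype=float) ** 2
    arg = (e2 - a) / b
    return np.where(arg > 0,
                    np.power(np.clip(arg, 0.0, None), 1.0 / n),
                    np.nan)

# round-trip check: voltage synthesized from U = 3.0 with a=1.2, b=0.8
e = math.sqrt(1.2 + 0.8 * 3.0 ** 0.45)
u = voltages_to_wind(e, 1.2, 0.8)
```

Flagging out-of-range voltages as NaN (rather than clipping to zero) keeps bad samples visible in the netcdf output alongside the calibration diagnostics.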
Related to calibration quality and #2 ("which wind component should the hot film voltages be calibrated against?"): the netcdf metadata should include the other mean wind velocity components, and maybe the wind direction, relative to the sonic instrument coordinates. Besides affecting the quality of the calibration, that also provides a way to filter data by how much non-orthogonal wind there was during the calibration period.
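A sketch of how such filter metrics might be derived from the mean components, assuming standard sonic coordinates where $u$ is along the boom and $(v, w)$ are the components non-orthogonal to the hot film (function and variable names hypothetical):

```python
import numpy as np

def offaxis_metrics(u_mean, v_mean, w_mean):
    """Mean wind direction relative to the sonic u axis (degrees), plus
    the fraction of the mean speed that is off-axis, i.e. in (v, w)."""
    speed = np.sqrt(u_mean**2 + v_mean**2 + w_mean**2)
    direction = np.degrees(np.arctan2(v_mean, u_mean))
    offaxis_frac = np.sqrt(v_mean**2 + w_mean**2) / speed
    return direction, offaxis_frac
```

Both values could be written as per-calibration-period attributes or variables in the netcdf output, so downstream users can screen periods with too much off-axis flow.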