
Improving your results


Amazon Lookout for Equipment is no longer open to new customers. Existing customers can continue to use the service as normal. For capabilities similar to Amazon Lookout for Equipment, see our blog post.

To improve the results, consider the following:

  • Did unrecorded maintenance events, system inefficiencies, or a new normal operating mode happen during the time of flagged anomalies in the test set? If so, the results indicate those situations. Change your train-evaluation splits so that each normal mode is captured during model training.

  • Are the sensor inputs relevant to the failure labels? In other words, is it possible that the labels are related to one component of the equipment but the sensors are monitoring a different component? If so, consider building a new model where the sensor inputs and labels are relevant to each other and drop any irrelevant sensors. Alternatively, drop the labels you're using and train the model only on the sensor data.

  • Is the label time zone the same as the sensor data time zone? If not, consider adjusting the time zone of your label data to align with the sensor data's time zone.

  • Is the failure label range inadequate? In other words, could there be anomalous behavior outside of the label range? This can happen for a variety of reasons, such as when the anomalous behavior was observed much earlier than the actual repair work. If so, consider adjusting the range accordingly.

  • Are there data integrity issues with your sensor data? For example, do some of the sensors become nonfunctional during the training or evaluation period? In that case, consider dropping those sensors when you run the model. Alternatively, use a training-evaluation split that filters out the nonfunctional part of the sensor data.

  • Does the sensor data include uninteresting normal-operating modes, such as off-periods or ramp-up or ramp-down periods? Consider filtering those out of the sensor data.

  • We recommend that you avoid using data that contains monotonically increasing values, such as operating hours or mileage.
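Several of the checks above (aligning label time zones, dropping nonfunctional or monotonically increasing sensors, and filtering off-periods) can be sketched as simple pandas preprocessing steps. This is an illustrative sketch, not part of the Lookout for Equipment API; the column names, thresholds, and function names are assumptions for the example.

```python
import pandas as pd

def align_label_timezone(labels: pd.DataFrame, sensor_tz: str) -> pd.DataFrame:
    """Convert label start/end timestamps to the sensor data's time zone.
    Assumes 'start'/'end' columns; both names are illustrative."""
    out = labels.copy()
    for col in ("start", "end"):
        out[col] = pd.to_datetime(out[col], utc=True).dt.tz_convert(sensor_tz)
    return out

def drop_flat_sensors(df: pd.DataFrame, min_std: float = 1e-6) -> pd.DataFrame:
    """Drop sensor columns whose readings barely vary
    (a likely sign the sensor was nonfunctional)."""
    flat = [c for c in df.columns if df[c].std() < min_std]
    return df.drop(columns=flat)

def drop_monotonic_sensors(df: pd.DataFrame) -> pd.DataFrame:
    """Drop counters such as operating hours or mileage that only increase."""
    mono = [c for c in df.columns
            if df[c].is_monotonic_increasing and df[c].nunique() > 1]
    return df.drop(columns=mono)

def filter_off_periods(df: pd.DataFrame, power_col: str, threshold: float) -> pd.DataFrame:
    """Keep only rows where the equipment is actually running,
    judged by an assumed power/load column exceeding a threshold."""
    return df[df[power_col] > threshold]
```

For example, `drop_monotonic_sensors` would remove a mileage counter but keep a constant-valued column for `drop_flat_sensors` to handle, so the two checks are complementary rather than redundant.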

© 2025, Amazon Web Services, Inc. or its affiliates. All rights reserved.