Performance diagnosis systems detect abnormal performance phenomena and play a crucial role in cloud applications. An effective performance diagnosis system is often built on artificial intelligence (AI) approaches, which can be summarized into a general framework spanning data to models. However, this AI-based framework carries potential hazards that can degrade user experience and erode trust. For example, a lack of data privacy may compromise the security of AI models, and models with low robustness are hard to apply in complex cloud environments. Defining the requirements for building a trustworthy AI-based performance diagnosis system has therefore become essential. This article systematically reviews trustworthiness requirements in AI-based performance diagnosis systems. We first introduce trustworthiness requirements and extract six key requirements from a technical perspective: data privacy, fairness, robustness, explainability, efficiency, and human intervention. We then unify these requirements into a general performance diagnosis framework spanning data collection to model development. Next, we comprehensively survey related work for each component of the framework and describe concrete actions to improve its trustworthiness. Finally, we identify open research directions and challenges for the future development of trustworthy AI-based performance diagnosis systems.