
Publication date
2018

Rights
© Springer Nature Switzerland AG 2018. Reproduced in accordance with the publisher's self-archiving policy. The final publication is available at Springer via https://doi.org/10.1007/978-3-030-04191-5_17.

Peer-Reviewed
Yes

Accepted for publication
2018
Abstract
Decision trees are a simple yet powerful learning technique, widely regarded as one of the most successful algorithms used in practice for a variety of classification tasks. They have the advantage of producing a comprehensible classification model with satisfactory accuracy levels in several application domains. In recent years, the volume of data available for learning has increased dramatically. As a result, many application domains must deal with large amounts of data, which poses a major bottleneck for the computability of learning techniques. There are several implementations of the decision tree, based on different techniques. In this paper, we theoretically and experimentally study and compare the computational power of the most common classical top-down decision tree algorithms (C4.5 and CART). This work can serve as part of a review analysing the computational complexity of existing decision tree classifier algorithms, with the aim of understanding their operational steps and optimizing the learning algorithm for large datasets.

Version
Accepted Manuscript

Citation
Sani HM, Lei C and Neagu D (2018) Computational complexity analysis of decision tree algorithms. In: Bramer M, Petridis M (eds) Artificial Intelligence XXXV. SGAI 2018. Lecture Notes in Computer Science, vol 11311. Springer, Cham, pp 191-197.

Link to Version of Record
https://doi.org/10.1007/978-3-030-04191-5_17

Type
Conference paper
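To illustrate the operation whose cost the abstract refers to, the following is a minimal sketch (not the authors' experimental code) of the split-selection step that dominates top-down decision tree induction: CART scores candidate splits with Gini impurity, while C4.5's information gain is based on entropy. The toy data and the `best_threshold` helper are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of split selection in top-down decision tree induction.
# CART uses Gini impurity; C4.5's information gain uses entropy. For a
# numeric attribute, both require sorting the n examples, which is the
# source of the O(n log n) term per attribute in complexity analyses.
from collections import Counter
from math import log2

def gini(labels):
    """Gini impurity used by CART: 1 - sum_c p_c^2."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Entropy used by C4.5's information gain: -sum_c p_c log2 p_c."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_threshold(values, labels, impurity):
    """Find the threshold on one numeric attribute minimising the
    weighted impurity of the two children. Naive version: O(n log n)
    for the sort, then O(n) work per candidate split."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_score, best_thr = float("inf"), None
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no valid threshold between equal attribute values
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        score = (len(left) * impurity(left) + len(right) * impurity(right)) / n
        if score < best_score:
            best_score = score
            best_thr = (pairs[i - 1][0] + pairs[i][0]) / 2
    return best_score, best_thr

# Toy attribute: small values are class 0, large values are class 1,
# so a perfect split (impurity 0.0) exists at threshold 6.5.
x = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
y = [0, 0, 0, 1, 1, 1]
print(best_threshold(x, y, gini))     # -> (0.0, 6.5)
print(best_threshold(x, y, entropy))  # -> (0.0, 6.5)
```

Repeating this scan over every attribute at every node is what makes split selection the dominant cost as datasets grow, which is the bottleneck the paper's complexity comparison of C4.5 and CART addresses.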