
Research Article
Scalable Smart Contracts for Linear Regression Algorithm
@INPROCEEDINGS{10.1007/978-3-031-31420-9_2,
  author    = {Syed Badruddoja and Ram Dantu and Yanyan He and Abiola Salau and Kritagya Upadhyay},
  title     = {Scalable Smart Contracts for Linear Regression Algorithm},
  booktitle = {Blockchain Technology and Emerging Technologies. Second EAI International Conference, BlockTEA 2022, Virtual Event, November 21-22, 2022, Proceedings},
  series    = {BLOCKTEA},
  publisher = {Springer},
  year      = {2023},
  month     = {4},
  keywords  = {Blockchain; dApp; Smart Contract; Artificial Intelligence; Multiple Linear Regression; Arbitrum; Ethereum},
  doi       = {10.1007/978-3-031-31420-9_2}
}
- Syed Badruddoja
- Ram Dantu
- Yanyan He
- Abiola Salau
- Kritagya Upadhyay
Year: 2023
Scalable Smart Contracts for Linear Regression Algorithm
BLOCKTEA
Springer
DOI: 10.1007/978-3-031-31420-9_2
Abstract
Linear regression algorithms capture information from previous experiences and build a cognitive model to forecast the future. Both the information and the cognitive model built from it must be reliable if the predicted outputs are to be trusted. Furthermore, the algorithms must be explainable and traceable, making the learning process meaningful and trackable. Blockchain smart contracts boost information integrity, providing the trust and provenance of distributed ledger transactions that such requirements demand. Smart contracts are traditionally developed to perform simple transactions with integer operations. However, implementing learning algorithms such as linear regression in smart contracts requires complex computation involving floating-point operations, which smart contracts do not support. Moreover, smart contract transactions are expensive and time-consuming. In this work, we propose a novel implementation of smart contracts for linear regression algorithms with fraction-based computation that can train and predict on the Ethereum blockchain. Our smart contract-based training and prediction technique, written in the Solidity programming language, produced a mean square error similar to that of a scikit-learn-based prediction model. Moreover, our design strategy reduces the training cost of linear regression algorithms by moving computation off-chain with an optimistic roll-up solution. The off-chain training and on-chain prediction strategy demonstrated in our work will help academic and industry researchers develop cost-effective distributed AI applications in the future.
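The abstract's key technical idea is fraction-based computation: since Solidity has no floating-point types, real-valued model parameters can be represented as pairs of integers (numerator, denominator) and combined with integer arithmetic only. The sketch below illustrates this idea in Python; the function names and the example coefficients are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of fraction-based arithmetic for on-chain prediction.
# Each real value is an integer pair (numerator, denominator), so a contract
# could evaluate y = sum(w_i * x_i) + b using only integer operations.

def frac_mul(a, b):
    """(p/q) * (r/s) = (p*r) / (q*s), all in integer arithmetic."""
    return (a[0] * b[0], a[1] * b[1])

def frac_add(a, b):
    """(p/q) + (r/s) = (p*s + r*q) / (q*s), all in integer arithmetic."""
    return (a[0] * b[1] + b[0] * a[1], a[1] * b[1])

def frac_to_float(f):
    """Convert back to a float only for off-chain inspection."""
    return f[0] / f[1]

def predict(weights, bias, features):
    """Multiple linear regression: y = sum(w_i * x_i) + b, fraction-based."""
    acc = bias
    for w, x in zip(weights, features):
        acc = frac_add(acc, frac_mul(w, x))
    return acc

# Hypothetical model: y = 0.5*x1 + 0.25*x2 + 1, with inputs x1 = 3, x2 = 4.
w = [(1, 2), (1, 4)]
b = (1, 1)
x = [(3, 1), (4, 1)]
y = predict(w, b, x)
print(frac_to_float(y))  # 0.5*3 + 0.25*4 + 1 = 3.5
```

In a real contract the denominators would grow with each operation, so a deployed version would need periodic reduction or a fixed scaling factor; the sketch only shows the core representation.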