Bhimavarapu, U., Battineni, G., Chintalapudi, N.: Improved optimization algorithm in LSTM to predict crop yield. Computers 12(1), 10 (2023). Datasets: https://www.kaggle.com/datasets/akshatgupta7/crop-yield-in-indian-states-dataset, https://www.kaggle.com/datasets/patelris...
Consequently, a user-friendly tool, 'Crop Yield Predictor', was developed, rendering the model accessible and practical for on-ground applications in agriculture. This tool effectively translates complex data and algorithms into actionable insights, bridging the gap between advanced machine le...
Crop yield prediction is a crucial aspect of agricultural planning and decision-making. This study utilizes a Kaggle dataset featuring State, Year, Season, Crop, Area, Production, etc., employing extensive data preprocessing and one-hot encoding of categorical features to enhance predictive performance...
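A minimal sketch of the preprocessing and one-hot encoding step described above, assuming the Kaggle CSV exposes the listed columns (State, Year, Season, Crop, Area, Production) plus a Yield target; the file name and column names are illustrative assumptions, not confirmed by the study.

```python
# Sketch: preprocessing + one-hot encoding for a crop yield dataset.
# File name and column names are assumptions based on the description above.
import pandas as pd

df = pd.read_csv("crop_yield.csv")  # hypothetical file name

# Simple cleaning pass: drop rows with missing values.
df = df.dropna()

# One-hot encode the categorical features; numeric columns pass through.
categorical = ["State", "Season", "Crop"]
df_encoded = pd.get_dummies(df, columns=categorical)

# Split into feature matrix and target (assumed "Yield" column).
X = df_encoded.drop(columns=["Yield"])
y = df_encoded["Yield"]
print(X.shape, y.shape)
```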
PlantVillage dataset. Retrieved April 04, 2021, from https://www.kaggle.com/emmarex/plantdisease
Image processing in OpenCV. (n.d.). Retrieved April 04, 2021, from https://docs.opencv.org/master/d2/d96/tutorial_py_table_of_contents_imgproc.html
Firstly, we delve into a crop recommendation dataset obtained from Kaggle, consisting of various input attributes such as the pH of the soil, temperature, humidity, and nutrient levels. Leveraging machine learning classification techniques such as Extra Tree Classifier (ETC), Logistic Regression (LR)...
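A hedged sketch of the classification comparison described above, assuming the Kaggle crop-recommendation CSV with soil and climate features (N, P, K, temperature, humidity, ph, rainfall) and a crop "label" column; the file and column names are assumptions drawn from the common version of that dataset.

```python
# Sketch: comparing Extra Trees (ETC) and Logistic Regression (LR) on a
# crop recommendation dataset. Column names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("Crop_recommendation.csv")  # hypothetical file name
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

for name, model in [
    ("Extra Trees (ETC)", ExtraTreesClassifier(n_estimators=200, random_state=42)),
    ("Logistic Regression (LR)", LogisticRegression(max_iter=1000)),
]:
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```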
This strategy uses the crop recommendation dataset from Kaggle, which is classified using bagging. The classified instances, along with the ontology cluster, are semantically aligned using the spider monkey optimization algorithm, from which facts are obtained after suitable verification. The query is asked by the ...
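Of the pipeline above, only the bagging stage lends itself to a short sketch; the ontology alignment and spider monkey optimization steps are method-specific and not reproduced here. The sketch below assumes the same crop-recommendation columns as the previous example and scikit-learn ≥ 1.2 (for the `estimator` keyword).

```python
# Sketch: bagging stage only. Bootstrap-sampled decision trees with a
# majority vote; dataset file and column names are assumptions.
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("Crop_recommendation.csv")
X, y = df.drop(columns=["label"]), df["label"]

bag = BaggingClassifier(
    estimator=DecisionTreeClassifier(),  # base learner, resampled per estimator
    n_estimators=50,
    random_state=42,
)
print(cross_val_score(bag, X, y, cv=5).mean())
```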
The dataset for this research was collected from the Kaggle website and consists of 6 different crop types with 11 nutrients. The models are trained and tested on 80% and 20% of the dataset, respectively. The results show that Extreme Gradient Boosting, followed by Naive Bayes, performs better with ...
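A minimal sketch of the 80/20 evaluation described above, assuming a CSV with 11 nutrient columns and a crop-type label (file and column names are assumptions); it requires the xgboost package in addition to scikit-learn.

```python
# Sketch: 80/20 train/test split comparing XGBoost and Naive Bayes.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

df = pd.read_csv("crop_nutrients.csv")        # hypothetical file name
X = df.drop(columns=["crop"])                 # 11 nutrient features (assumed)
y = LabelEncoder().fit_transform(df["crop"])  # 6 crop classes as integers

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

for name, model in [("XGBoost", XGBClassifier()), ("Naive Bayes", GaussianNB())]:
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```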
Experimental analysis was performed on a dataset collected from Kaggle. The Random Forest algorithm outperforms XG Boost, Decision Tree, and KNN, achieving a high accuracy of 99.3% and an F1 score of 99.01%. Hyperparameter tuning is additionally performed on XG Boost and Random Forest...
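A hedged sketch of the hyperparameter tuning step mentioned above, using scikit-learn's GridSearchCV on a Random Forest; the parameter grid and the synthetic stand-in dataset are illustrative assumptions, not the grid or data used in the cited work.

```python
# Sketch: grid-search hyperparameter tuning for a Random Forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the Kaggle data (shape chosen for illustration).
X, y = make_classification(
    n_samples=1000, n_features=11, n_informative=8, n_classes=4, random_state=42
)

param_grid = {  # assumed grid, for illustration only
    "n_estimators": [100, 200, 500],
    "max_depth": [None, 10, 20],
    "min_samples_split": [2, 5],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    scoring="f1_macro",  # F1 is reported alongside accuracy in the study
    cv=5,
    n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```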