
Sunday, September 15, 2024

Technological Apocalypse and Anthropocentric Information Security

Impending Technological Apocalypse amidst IR 4.0 and Web 3.0: The Need for Anthropocentric Information Security

As we stand on the precipice of the Fourth Industrial Revolution (IR 4.0) and the dawn of Web 3.0, the cybersecurity landscape is evolving at an unprecedented pace. The convergence of physical, digital, and biological spheres is creating a world of infinite possibilities – and equally boundless vulnerabilities. In this article, we'll explore the potential technological apocalypse looming on the horizon and argue for a shift towards anthropocentric information security to mitigate these risks.


The Perfect Storm: IR 4.0 and Web 3.0

The Fourth Industrial Revolution is characterized by the fusion of technologies that blur the lines between the physical, digital, and biological spheres. Artificial Intelligence, Internet of Things (IoT), robotics, and quantum computing are just a few of the technologies reshaping our world. Simultaneously, Web 3.0 promises a decentralized internet built on blockchain technology, offering increased user autonomy and data ownership.

While these advancements promise unprecedented opportunities, they also present significant security challenges:

  1. Expanded Attack Surface: With billions of connected devices, the potential entry points for cybercriminals have multiplied exponentially.
  2. AI-Powered Attacks: Malicious actors are leveraging AI to create more sophisticated and targeted attacks, outpacing traditional security measures.
  3. Quantum Threat: The advent of quantum computing threatens to render current encryption methods obsolete, potentially exposing vast amounts of sensitive data.
  4. Decentralized Vulnerabilities: While Web 3.0's decentralized nature offers benefits, it also introduces new security challenges, particularly in areas like smart contract vulnerabilities and private key management.

The Impending Technological Apocalypse

The convergence of these factors could lead to a technological apocalypse – a scenario where our increasing dependence on interconnected systems becomes our Achilles' heel. Imagine a world where:

  • Critical infrastructure is held hostage by ransomware attacks at an unprecedented scale.
  • AI-driven deepfakes manipulate financial markets and political landscapes.
  • Quantum computers crack encryption protecting sensitive government and financial data.
  • Decentralized autonomous organizations (DAOs) are hijacked, leading to massive financial losses.

This isn't science fiction – these are real possibilities that security professionals must prepare for.

Man vs. Machine: Real-World Examples

The "Man vs. Machine" scenario is no longer confined to the realm of science fiction. Here are some real-world examples that highlight the growing tension between human control and machine autonomy:

  1. Algorithmic Trading Gone Wrong: In 2010, the "Flash Crash" saw the Dow Jones Industrial Average plummet nearly 1,000 points in minutes due to high-frequency trading algorithms, highlighting the potential for AI to cause significant financial disruption.
  2. Autonomous Vehicle Accidents: The fatal crash involving a Tesla in Autopilot mode in 2016 raised questions about the reliability of AI in critical decision-making scenarios and the appropriate level of human oversight.
  3. AI in Healthcare Diagnosis: IBM's Watson for Oncology was found to make unsafe and incorrect treatment recommendations, demonstrating the risks of over-relying on AI in critical healthcare decisions.
  4. Facial Recognition Misidentification: In 2018, Amazon's Rekognition facial recognition system incorrectly matched 28 members of Congress to criminal mugshots, highlighting the potential for AI bias in law enforcement applications.
  5. Social Media Algorithm Manipulation: The Cambridge Analytica scandal revealed how AI algorithms could be exploited to manipulate public opinion and influence democratic processes.

These examples underscore the need for a human-centered approach to technology development and deployment, especially in high-stakes environments.

The Need for Anthropocentric Information Security

To avert this technological apocalypse, we need a paradigm shift in our approach to information security. Enter anthropocentric information security – a human-centered approach that puts people at the heart of security strategies.

Key principles of anthropocentric information security include:

  1. Human-Centric Design: Security solutions should be designed with human behavior and limitations in mind, making secure practices intuitive and easy to adopt.
  2. Ethical Considerations: As AI and automation play larger roles in security, we must ensure that ethical considerations guide their development and deployment.
  3. Digital Literacy: Invest in widespread digital literacy programs to create a more security-aware population.
  4. Adaptive Security: Develop security systems that can learn and adapt to human behavior, providing personalized protection.
  5. Transparent AI: Ensure AI-driven security solutions are explainable and transparent, allowing human oversight and intervention.
  6. Privacy by Design: Incorporate privacy considerations from the ground up in all technological developments.
  7. Resilience Training: Prepare individuals and organizations to respond effectively to security incidents, fostering a culture of cyber resilience.

AI Ethical Considerations

As AI becomes increasingly integrated into our security infrastructure, it's crucial to address the ethical implications:

  1. Bias and Fairness: AI systems can perpetuate and amplify existing biases. For example, facial recognition systems have shown higher error rates for minorities and women. We must ensure AI security systems are trained on diverse datasets and regularly audited for bias.
  2. Transparency and Explainability: The "black box" nature of many AI algorithms poses a challenge for security. We need to develop AI systems that can explain their decision-making processes, especially when those decisions impact human lives or rights.
  3. Accountability: As AI systems become more autonomous, questions of liability arise. Who is responsible when an AI-powered security system makes a mistake? We need clear frameworks for AI accountability in security contexts.
  4. Privacy: AI systems often require vast amounts of data to function effectively. We must balance the need for data with individuals' right to privacy, implementing strong data protection measures and giving users control over their information.
  5. Human Oversight: While AI can process information faster than humans, it lacks human judgment and contextual understanding. We must maintain meaningful human oversight in critical security decisions.
  6. Autonomous Weapons: The development of AI-powered autonomous weapons raises serious ethical concerns. We need international agreements to regulate or prohibit such systems.
  7. Job Displacement: As AI takes over more security tasks, we must consider the impact on human security professionals. Retraining programs and new job creation should be part of our security strategies.

Implementing Anthropocentric Information Security

To implement this approach, organizations and policymakers should:

  1. Invest in human-centered security research and development.
  2. Incorporate behavioral sciences into security strategies.
  3. Develop comprehensive digital literacy programs.
  4. Create regulatory frameworks that mandate ethical AI and privacy considerations in technology development.
  5. Foster collaboration between technologists, ethicists, and policymakers.
  6. Establish ethics review boards for AI security systems.
  7. Develop international standards for AI ethics in cybersecurity.

Conclusion

As we navigate the complexities of IR 4.0 and Web 3.0, the threat of a technological apocalypse looms large. The real-world examples of "Man vs. Machine" scenarios highlight the urgent need for a more balanced approach. By shifting towards an anthropocentric approach to information security and carefully considering the ethical implications of AI, we can harness the power of these technological revolutions while mitigating their risks. It's time to put humans at the center of our security strategies – our digital future depends on it.

 

Read My New Book


"ManusCrypt: Designed For Mankind" is a groundbreaking work by Prashant Upadhyaya that explores the intersection of humanity and technology in the digital age. This book delves into the concept of 'ManusCrypt,' a term that likely combines 'Manus' (Latin for 'hand,' symbolizing human touch) and 'Crypt' (suggesting encryption or protection).


Thursday, September 2, 2021

Van Westendorp Approach Using Python

 

Van Westendorp Approach Using Python

(Artificial Intelligence Series)


Strategic pricing is a critical part of product management when launching a new product. There are many methods for pricing a product, and one of the most popular is the Van Westendorp approach. The beauty of this approach is that it summarizes the survey responses in the form of a “Price Sensitivity Meter”. Once the graph is plotted, we get the following price points at the intersections:

a.       OPP (Optimal Price Point) – The price at which consumers are most willing to pay, where resistance to buying is lowest.

b.      IPP (Indifference Price Point) – The price at which an equal number of consumers perceive the product as cheap and as expensive, often read as the prevailing market price.

c.       PMC (Point of Marginal Cheapness) – The price below which the consumer will consider the product too cheap and might not consider buying.

d.      PME (Point of Marginal Expensiveness) – The price beyond which the consumer will consider the product too expensive and might not consider buying.

From the above four price points, we get the RAP (Range of Acceptable Prices), bounded by the PMC and the PME, which can be used to price our product. All said and done, let’s get our hands dirty now!

A survey is conducted to assess the likely price points with questions like:

·         At what price would you consider the product to be too expensive and out of reach? (Too Expensive)

·         At what price would you consider the product to be expensive but still worth buying? (Fleecing)

·         At what price would you consider the product to be cheap and still worth buying? (Bargain)

·         At what price would you consider the product to be so cheap that its quality would be doubtful? (Too Cheap)

All the responses are collected and the dataset is arranged in the following manner:


Please note that the first two columns are sorted in descending order while the last two are sorted in ascending order. Using this dataset, one can plot the Van Westendorp graph in Excel itself. While this might seem quite easy and does not require any other dependencies like a Python IDE, I personally detest doing it in Excel. I will explain the reasons at the end of this article.

I’ve used the Miniconda CLI, a Jupyter notebook and the .csv data format for this project. You may choose other IDEs and tools that you are comfortable with, like Spyder, NoSQL, etc.

Launch a fresh notebook and import the necessary libraries. Now, load the data and check the shape as shown below:
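The notebook screenshots are not reproduced here. A minimal sketch of this step, assuming the survey export is a CSV with one column per question (the file name and column names below are placeholders, not the author's actual ones):

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Placeholder file and column names; the author's actual notebook and data live in his repository
df = pd.read_csv("psm_survey.csv")   # columns: too_cheap, bargain, expensive, too_expensive
print(df.shape)                      # e.g. (39, 4) before any cleaning
df.head()
```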


The next step is to remove the intransitive price preferences* and compute the cumulative frequencies, as shown below:
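Continuing in the same notebook, a sketch of how this could look, under the assumption that a transitive response must satisfy too cheap < bargain < expensive < too expensive:

```python
# Keep only transitive responses: too cheap < bargain < expensive < too expensive
transitive = (
    (df["too_cheap"] < df["bargain"])
    & (df["bargain"] < df["expensive"])
    & (df["expensive"] < df["too_expensive"])
)
df = df[transitive].reset_index(drop=True)
print(df.shape)   # the row count drops here (39 -> 16 in the run described later)

# Cumulative frequencies over a common grid of all quoted prices
prices = np.sort(pd.unique(df.values.ravel()))
n = len(df)
too_cheap     = [(df["too_cheap"]     >= p).sum() / n for p in prices]   # falling curve
bargain       = [(df["bargain"]       >= p).sum() / n for p in prices]   # falling curve
expensive     = [(df["expensive"]     <= p).sum() / n for p in prices]   # rising curve
too_expensive = [(df["too_expensive"] <= p).sum() / n for p in prices]   # rising curve
```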


Once the above is done, we need to define a table where all the array values can be tabulated.
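For example, the arrays from the previous step can be gathered into a single DataFrame:

```python
psm = pd.DataFrame({
    "price": prices,
    "too_cheap": too_cheap,
    "bargain": bargain,
    "expensive": expensive,
    "too_expensive": too_expensive,
})
psm   # displays the tabulated cumulative frequencies in the notebook
```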

 

Now, we need to compute the results and get the values, as shown below:
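One simple way to read off the four price points is to locate where the relevant pairs of curves cross. The pairing below is a common simplified variant; the author's own implementation may intersect slightly different curve pairs:

```python
def crossing(price, falling, rising):
    """Return the first price at which a falling curve meets or dips below a rising curve."""
    diff = np.asarray(falling) - np.asarray(rising)
    idx = int(np.argmax(diff <= 0))   # first index where the curves have crossed
    return float(price[idx])

pmc = crossing(prices, too_cheap, expensive)       # Point of Marginal Cheapness
opp = crossing(prices, too_cheap, too_expensive)   # Optimal Price Point
ipp = crossing(prices, bargain, expensive)         # Indifference Price Point
pme = crossing(prices, bargain, too_expensive)     # Point of Marginal Expensiveness
print(f"PMC={pmc}, OPP={opp}, IPP={ipp}, PME={pme}")
```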


The last step is to plot the graph:
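A basic matplotlib version of the plot, with the acceptable price range shaded, could look like this:

```python
plt.figure(figsize=(9, 5))
plt.plot(prices, too_cheap, "--", label="Too cheap")
plt.plot(prices, bargain, label="Bargain")
plt.plot(prices, expensive, label="Expensive")
plt.plot(prices, too_expensive, "--", label="Too expensive")
plt.axvspan(pmc, pme, alpha=0.1, label="Acceptable price range")
plt.xlabel("Price")
plt.ylabel("Cumulative share of respondents")
plt.title("Van Westendorp Price Sensitivity Meter")
plt.legend()
plt.show()
```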





The output graph will look like this:


So, that’s it; pretty simple, isn’t it? Well, no. It wasn’t that easy when I did it for the first time. Let me share my learnings with you. Prior to plotting it in Python, I did it in Excel. Here is a comparison of the values that I got:

One can easily note that the values arrived at via Python are higher than those from Excel, for the following reasons:

·         Excel accepts all price points from the dataset as they are in order to plot the graph, while in Python I have the freedom to “remove intransitive price preferences*”. Had I skipped this step, the values from Python would have matched those from Excel.

·         It is imperative to remove the intransitive price preferences, as not doing so would produce a skewed graph as well as incorrect values.

·         As a result, my data shape dropped from 39 to 16, meaning that 23 entire rows were invalidated.

The obvious question that would arise in the minds of data analysts and machine learning enthusiasts is – Why not scale or normalize the dataset?

The answer is plain and simple here – applying normalization to the dataset would replace the suspect values (intransitive, duplicate, etc.) with the average or mean. Since this is akin to “changing the respondents’ answers”, it would amount to gross manipulation of the data and defeat the purpose of the survey.

Thus, the key takeaways from this exercise are:

1.       Statistical inferences are better accomplished with Python than with Excel

2.       Respondents are prone to error and one might need to drop off the incorrect answers

3.       Adequate sample size is a must to ensure that there are enough data points to suffice for the inference

4.       Scaling or normalization is not a ‘one size fits all’ solution and must be applied with sufficient caution

5.       Intent needs to be preserved even if it is at the cost of the content

Note: *intransitive price preferences – price points that do not follow the expected increasing order across the four questions, but instead coincide or conflict. This can happen when a respondent chooses the same price point for more than one question, e.g. Rs. 8000 selected as the ‘bargain’ value as well as the ‘too cheap’ value.

P.S. You can contact the author to know more about this article. The sample Python code is available at https://github.com/southpau79/humintonline .

For those too lazy to read who have scrolled to find the end of the page, here's the video:



Saturday, July 17, 2021

OPINION MINING & JUDGEMENT ERRORS

OPINION MINING & JUDGEMENT ERRORS

(Artificial Intelligence Series)

Video link for this article:


Machine learning is a great tool for developing predictive models. One of the most interesting applications is its use in opinion mining. Opinion mining is itself a broad field, with numerous use cases ranging from product reviews to personality assessment. Advocates of AI often argue that human judgment is always biased and that machines can perform better in this area. However, the bone of contention here is this – who is training those machines? Without getting into the complexities involved, let me draw your attention to the psychological / technical basis of judgment and the error-prone systems that we face each day.

Products

How do we judge any given product? Without understanding this, training a machine to analyze huge chunks of data and generate results, be it a recommender system or a UX dashboard, leaves the value-to-effort ratio below unity. We have tried everything from basic “queuing theory” to “gamification” and even “hyper-game theory”, but still struggle to find the golden ratio. In an ideal scenario, the following relation will hold good:
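The equation image has not survived in this copy of the post; judging from the paragraph that follows, the intended relation is presumably along these lines:

\[ \text{decision to buy} \iff \frac{\text{perceived value}}{\text{perceived effort}} > 1 \]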

When emotions overtake the intellect, fueling our experience with a cause or reason-to-believe that goes beyond the expectations of the product itself, we make the decision to buy. Whether it is a need or a want, whether it is justified or not, whether it comes with a positive or a negative feeling (e.g. a vaccination received out of fear), people buy something if and only if the perceived value is greater than the perceived effort. Great marketers and advertisers know this too well, and they constantly exploit emotions to evoke the decision to buy.

Just look at the film industry and compare movie reviews from the pre-COVID and post-COVID eras. Until two years ago, when the public could watch a movie of their choice inside a theatre, reviews were mostly positive; even mediocre movies would gain 3.5-4 stars on a scale of 5. Post-COVID, with people watching movies mostly on OTT platforms, reviews have become more critical, and everyone is seeking quality content amidst binge-watching. Many big-star movies are being rated below 3 stars, while seemingly underdog movies are being rated higher. The reason for this shift is that the pandemic has bitten into the overall “movie-going” experience, letting watchers use their intellect more rather than being swayed by emotions that were previously fueled with anticipation via teasers, trailers, promotions, campaigns and advertisements.

Thus, no amount of “sentiment analysis” of movie review data can predict anything with accuracy without understanding the basic premise or context.

Processes

What happens before any output is generated? If we do not understand the process behind a given outcome, we are playing a “trial-and-error” game. Obviously, one can’t play a game where the rules, the stakes and the quitting time remain unknown. This is one of the main reasons that domain knowledge is an absolute prerequisite for delivering any kind of technological solution, viz. machine learning in the current context. Understanding a process involves going beyond the process itself, as shown below:

With the kind of technological advancement we have today, one can create a robotic bartender and automate the process of serving drinks at the local pub. But without a “process approach” that considers the process, the environment, the human factor and other variables, one will end up with nothing more than a “talking vending machine”.

Training a machine simply by mimicking what a human being does adds no value to the process per se. If your algorithm can augment or complement what a human being or an existing machine can do, then it is definitely worth taking the leap forward. This is the main reason why the adoption of “conversational AI” like chatbots, digital assistants, etc. has not picked up on a massive scale. I know that many AI enthusiasts and experts will disagree with me and probably point to several hypotheses and modern theories like AGI, ASI, Swarm Intelligence, etc.; but the harsh truth is that we are centuries away from them.

People 

Why do people perceive something the way they do? The world we live in today is primarily system-driven, be it the rule of democracy or online shopping. That is what most of us are made to believe. But if you do a reality check at the grassroots level, everything depends on people. At the end of the day, it is people who affect both processes and products, either directly or indirectly. Organizations with the best processes and products in place can fail if they are not supported by the right kind of people. In terms of leadership, the people factor is of prime importance.

For the sake of our understanding, if we ignore the Fortune 500 list, all other companies, including startups and SMEs, depend heavily, often solely, on their leadership structure for their success, growth and / or survival. So, when it comes to selecting leaders or rewarding top performers, people make huge judgmental errors. As with the management of products and processes, there are numerous theories for people management, like BEI (Behavioral Event Interviewing), psychometrics, Adams’ Equity Theory, Herzberg’s Motivation Theory, the Jay Hall conflict management model, etc.

Most of these judgmental errors can be attributed to a misunderstanding of two of the most basic concepts:

Dunning-Kruger Effect – a cognitive bias whereby people with low ability at a task overestimate their ability.

Imposter Syndrome – a psychological pattern of perceived fraudulence, in which individuals doubt their skills, talents, or accomplishments despite external evidence of competence.

It is simply illogical to compare Guy Martin (famous as a pit-stop wheel changer) with Michael Schumacher (famous F1 racer). If one were to get into their minds, Guy might be found suffering from Imposter Syndrome and Michael Schumacher facing the Dunning-Kruger Effect. Now, if one were to reverse their roles, what would be the effect? While this may seem an extreme example, such scenarios occur often in our day-to-day routines. An example that everyone can relate to is appraisals. More weightage is given to attitude than to actual ability and past performance. Those who rate themselves high during self-appraisal are usually the ones who get the highest rating in the final appraisal. The fact remains that the lot who hold themselves in high esteem and exude confidence often sit on the Dunning-Kruger side, while those who are self-doubtful lie somewhere in the Imposter Syndrome region.

Thus, when it comes to people analytics or consumer research, even the best experts can go wrong when the primary data is based on opinions prone to judgment errors.

The following quote sums up my point too well:

Summarizing the whole article, I can say that the following factors need to be accounted for before embarking on a decision-making journey based on analytics:

  • Value created or projected (product-centric)

  • Variables both known and unknown (process-based)

  • Viewpoint of everyone involved (people-centric)

I help improve people, processes and products. Reach out to me to know more. 🙂

Thursday, November 5, 2020

DIGITAL TRANSFORMATION (DX) V/S FAKE TRANSFORMATION (FX)

 

KNOW THE DIFFERENCE

For those who don't like to read, here's the video:


In this new era that is ushering in the fourth industrial revolution, almost every organization is seeking to adopt “digital transformation”. But today’s world is filled with fake transformation projects. A mere technology upgrade or change management exercise does not amount to authentic transformation. Organizations and leaders in both the public and private sectors have been caught unawares by the “delusion of digital transformation”. Many of the projects that CEOs and other leaders undertake amount to little more than moving from traditional systems to a paperless office or shifting from legacy to cloud infrastructure. They are victims of this grand delusion, and it will eventually lead their organizations to an early grave.

Digital Transformation (DT or DX) is the adoption of digital technology to transform services or businesses, through replacing non-digital or manual processes with digital processes or replacing older digital technology with newer digital technology. Digital solutions may enable - in addition to efficiency via automation - new types of innovation and creativity, rather than simply enhancing and supporting traditional methods. (Source: https://en.wikipedia.org/wiki/Digital_transformation)

Digital Transformation is application of digital capabilities to processes, products, and assets to improve efficiency, enhance customer value, manage risk, and uncover new monetization opportunities. (Source: https://www.cio.com/article/3199030/what-is-digital-transformation.html)

Researchers have analyzed some digital transformation strategy examples and trends of recent years.

Eventually, some of the key predictions were:

·         By 2023, investments in digital transformation will grow from 36% in 2019 to over 50% of all information and communication technology investments.

·         Investments in direct digital transformations are rapidly growing at an annual rate of 17.5%. They’re expected to approach $7.4 trillion over 2020-2023.

·         By 2024, artificial intelligence-powered companies will respond to their customers and partners 50% faster than their peers.

(Source: IDC FutureScape: Worldwide Digital Transformation 2020 Predictions, October 2019)

The idea is to create whole new products, services or business models, not just improve old ones. Companies that go through digital transformation are said to be more agile, customer-centric and data-driven. DX can have different blueprints depending upon the company and industry, but in all cases it needs to follow these basic steps:

In terms of the People-Process-Technology framework, a definite “cultural shift” is a prerequisite to achieving DX in its truest sense. Businesses must learn to push boundaries, experiment, and accept the associated failures. This potentially involves abandoning well-established processes for new ones – ones that are often still being defined.

For real DX, one needs to separate from the herd involved in FX.

Look at the chart given below:

It doesn’t matter if you find yourself in quadrant one, but never stray onto a course that turns out to be FX instead of genuine DX. A few good examples of DX are given below for you to understand and get inspired by:

Anheuser-Busch (AB) InBev has looked at how digital transformation can be applied throughout the business while retaining its focus on serving consumers. They have achieved this via the following –

·         Developed a mobile application called B2B with an inbuilt algorithm that makes specific replenishment suggestions, creating opportunities for sales staff to talk about new brands and products with store owners.

·         Created a tech innovation lab, Beer Garage, to explore ways that artificial intelligence (AI), machine learning (ML) and the internet of things (IoT), among other technologies, can be used to improve experiences for consumers and retailers alike.

DHL is well known for its excellent stock management and supply chain, but that did not stop them from improving. Their stock management and supply chain systems are easy to use and automated, but they wanted to take things to the next level. For this, they decided to team up with Ricoh and Ubimax and –

·         Developed applications for smart glasses. By pairing smart glasses with these applications, workers can read bar codes, streamline pickup and drop-off, and reduce the chances of errors. Their stock price doubled from 20 euros in 2016 to 40 euros in 2018.

Honeywell has helped many companies improve their digital presence and capabilities. In 2016 the company began transforming itself digitally by introducing new data-centric, internet-connected offerings and devices. They have leveraged digital solutions like this –

·         Using new digitized internal solutions and customer data, the company now offers its customers more technology solutions and has reinvented its industrial process control. As a result, in the past four years, Honeywell’s stock per share price has gone from $95 to $174.

As a leader what you must be doing is listed below:

·         Develop competency – Invest in talent and upgrade the skills of employees in the organization. Digital and analytics skills are critical for DX and go a long way in bringing about real transformation.

·         Plan and prioritize – Assess the current scenario and develop a roadmap. An efficient plan makes a solid foundation for any achievement. Pick relevant themes and prepare a business case.

·         Commitment – Absolute commitment along with appropriate investment is crucial to bring about DX. Always look at the tangible as well as intangible benefits of the project.

A snapshot for a typical CDO is provided below:

Hence, the message is to steer clear of all types of fake transformation projects and drive real digital transformation projects that truly alter the arena. Let me conclude this article with an allegorical anecdote:

Resume statement (DX):

Surpassed targets by 60% through implementing energy-saving initiatives and loss prevention strategies via leading digital transformation projects across the organization, thereby contributing to the company’s bottom line.

Reality (FX):


Note: This blog, including all articles, is copyrighted by the author. Wherever external content is used, the relevant sources are cited via separate links or within the images themselves.

Sunday, July 19, 2020

Pitfalls to avoid for effective model building


Watch this video about this article:


It is of utmost importance that the most optimized model is deployed to production, and this is usually judged via model performance characteristics like accuracy, precision, recall, F1 score, etc. To achieve this, we may employ various methods like feature engineering, hyper-parameter tuning, SVMs, etc.
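For reference, a minimal scikit-learn sketch of computing these performance characteristics; the label arrays here are illustrative placeholders, not data from any project discussed below:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # placeholder ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # placeholder model predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1 score :", f1_score(y_true, y_pred))
```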

However, before optimizing any model, we need to choose the right one in the first place. There are several factors that come into play before we decide upon the suitability of any model like:

a.     Has the data been cleaned adequately?

b.     What methods have been used for data preparation?

c.      What feature engineering techniques are we going to apply?

d.     How do we interpret and handle the observations like skewness, outliers, etc.?

Here, we will focus on the last factor mentioned above, where most of us are prone to making mistakes.

It is standard practice to normalize the distribution by reducing outliers, dropping certain parameters, etc. before feature selection. But sometimes one might need to take a step back and observe –

a.     How is our normalization affecting the entire dataset and

b.     Is it gearing us towards the correct solution within the given context?

Let us examine this premise with a practical example as shown below.

Problem statement: Predicting concrete cement compressive strength using artificial neural networks

As usual, the data has been cleaned and prepared for detailed analysis before model selection and building. Please note that we will not be addressing the initial stages in this article. Let us have a look at some of the key steps and observations described below.

1.     Dropping outliers for normalization

An initial exploratory data analysis and visualization depicts the overall distribution of the target column “strength”:
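The original plot is not reproduced here; a sketch of how it could be generated, assuming the UCI file has been loaded into a DataFrame and the columns renamed to the short names used in this article (the renaming is my assumption):

```python
import pandas as pd
import matplotlib.pyplot as plt

# The UCI archive ships the data as an Excel sheet; short column names are assumed here
data = pd.read_excel("Concrete_Data.xls")
data.columns = ["cement", "slag", "ash", "water", "superplastic",
                "coarseagg", "fineagg", "age", "strength"]

data["strength"].plot(kind="hist", bins=30, edgecolor="black")
plt.xlabel("Compressive strength (MPa)")
plt.title("Distribution of the target column 'strength'")
plt.show()
```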


As seen above, the data distribution is quite sparse, with skewness in both directions (positive and negative). Further analysis reveals the following:
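The summary behind these observations is not shown; it could be reproduced along these lines, continuing from the previous cell:

```python
# Five-number summary per column; gaps between quartiles, medians and extremes hint at outliers
print(data.describe().T)

# Box plots make the outliers in slag, fineagg and strength visible at a glance
data.plot(kind="box", subplots=True, layout=(3, 3), figsize=(12, 8),
          sharex=False, sharey=False)
plt.tight_layout()
plt.show()
```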


The following are the observations:

a.   Cement, slag, ash, coarseagg and fineagg display huge differences, indicating the possibility of outliers

b.     Slag, ash and coarseagg have median values closer to either the 1st quartile or the minimum, while both slag and fineagg have maximum values that are outliers.

c.      The target column "strength" has many maximum values that are outliers.

Replacing outliers in the target (concrete compressive strength) with any other value would defeat the purpose of the analysis, i.e. developing a best-fit model that yields a mixture with “maximum compressive strength”. Hence, it is better to replace outliers with mean values only for the other variables, as per the analysis, and leave the target column as it is.
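A sketch of that replacement, using the common 1.5 x IQR rule and leaving the target column untouched (the exact outlier rule used in the original notebook may differ):

```python
for col in data.columns.drop("strength"):         # leave the target column as it is
    q1, q3 = data[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    is_outlier = (data[col] < lower) | (data[col] > upper)
    data.loc[is_outlier, col] = data[col].mean()  # replace outliers with the column mean
```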

2.     Dropping variables to reduce skewness

Before applying feature engineering techniques, we need to look at the correlation between the variables, as shown below:
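The correlation heat map is not reproduced; it could be generated with something like the following, continuing from the previous cells:

```python
import seaborn as sns

corr = data.corr()
plt.figure(figsize=(8, 6))
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm")
plt.title("Correlation between mixture components, age and strength")
plt.show()
```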





Observations based on our analysis:

a.     There is no high correlation between any of the variables

b.     Compressive strength increases with the amount of cement

c.      Compressive strength increases with age

d.     As fly ash increases, the compressive strength decreases

e.     Strength increases with the addition of superplasticizer

Observations based on domain knowledge:

a.     Cement with a low age requires more water for higher strength, i.e. the lower the age, the more water it requires

b.     Strength increases when less water is used in preparing the mix, i.e. more water leads to reduced strength

c.      Less coarse aggregate along with less slag increases strength

We can drop only the variable slag, while the rest need to be retained.

If we were to drop certain variables solely on the basis of the correlation observed in the given dataset, we would end up with a model having pretty high accuracy, but it would be at best a “paper model”, i.e. not practicable in the real world. Hence, a certain amount of domain knowledge, either direct or obtained through consultation with a subject-matter expert, goes a long way in avoiding major pitfalls while building models.

The above example pretty much sums up what we can call “bias” (pun intended), which most of us can be prone to whether we have a technical edge or a domain edge. Hence, it is good practice to rethink the methods applied vis-à-vis the big picture.

Source:  The data for this project is available at https://archive.ics.uci.edu/ml/machine-learning-databases/concrete/compressive/

Reference:  I-Cheng Yeh, "Modeling of strength of high performance concrete using artificial neural networks," Cement and Concrete Research, Vol. 28, No. 12, pp. 1797-1808 (1998).

 

