Persevero
Well-Known Member
Hey, it's been a while since I've posted something on these forums. Hope you all have a good New Year's!
One of the things I've been doing rather than posting here is studying for one of the hardest exams in my undergraduate course, Econometrics.
One of the past exams I've been working through has an exercise that reminded me of one of Grumpy Cat's posts in her statistics thread, and it ties right into this thread's subject: the exam presented a regression model trying to explain the number of crimes in a city using the following variables: expenditure on public safety, number of officers in the field, the unemployment rate, and a binary variable equal to 1 if the majority of the population was under the age of 18.
It also gave the regression's results, and the funniest part was that the coefficients said the more a city spends on public safety and the more cops it has on patrol, the greater the number of crimes. To a human this makes sense after a little contemplation: what's really going up is the number of crimes being reported and handled by law enforcement; people aren't deciding to commit more crimes because there are more police officers around. There's also the reverse causality problem: cities that already have more crime tend to spend more on policing in the first place. And it's impossible to measure the number of crimes that never get reported.
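Just to convince myself, I threw together a minimal sketch in Python (a toy simulation I made up, not the exam's data) where police have zero causal effect on crime by construction, but cities hire more officers in response to crime. Plain OLS still spits out a positive coefficient on police:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500  # hypothetical sample of cities

# Toy data-generating process (my assumption, not the exam's):
# unemployment drives crime, and cities hire police in response to
# crime, so police count is a consequence of crime, not a cause.
unemployment = rng.uniform(3, 15, n)
crime = 50 + 8 * unemployment + rng.normal(0, 10, n)
police = 20 + 0.5 * crime + rng.normal(0, 5, n)

# Plain OLS of crime on police and unemployment
X = np.column_stack([np.ones(n), police, unemployment])
beta, *_ = np.linalg.lstsq(X, crime, rcond=None)
print(f"coefficient on police: {beta[1]:.2f}")
# comes out strongly positive: "more cops, more crime"
```

The coefficient on police ends up around +1 even though, in this simulation, police don't cause crime at all; the regression is just picking up the hiring response going the other way.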
In Grumpy Cat's case it was two completely unrelated things: deaths from falling out of bed and cheese consumption. I can't remember the details right now because I can't find the thread.
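The mechanism behind that kind of thing is easy to reproduce, though: any two series that happen to trend in the same direction will correlate strongly. Here's a quick sketch with invented numbers (nothing to do with whatever the actual data in that thread was):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2020)

# Two made-up series that share nothing but an upward trend
cheese_kg = 13 + 0.3 * (years - 2000) + rng.normal(0, 0.3, len(years))
bed_deaths = 400 + 15 * (years - 2000) + rng.normal(0, 20, len(years))

r = np.corrcoef(cheese_kg, bed_deaths)[0, 1]
print(f"correlation: {r:.2f}")  # very high despite zero causal link
```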
Anyway, more and more of society's management is being automated, and there are plenty of science fiction depictions of the future where a supercomputer runs the whole thing. What I want to know is: how is it possible to teach an AI that data isn't the be-all and end-all? How do you teach an AI to weigh data based on its context, and to work out which direction the implications actually run?