Counting down the days, and learning Artificial Intelligence
by shiningsunnyday, Nov 22, 2017, 4:05 PM
So my first college interview was today.
It went okay, I guess - apart from talking about math, I mostly talked about being grateful and AoPS. The only downside was that since I've been feeling a bit weak the past few weeks on the Keto diet, I decided last minute to load up on carbs the day before. Unexpectedly, my body didn't adjust too well, and the result was two trips to the bathroom this morning. Hopefully the interviewer mistook my sitting on the edge of my seat for enthusiasm, so all is good.
Anyways, let's talk about school.
School is mostly dull, as is to be expected. In retrospect, I realize that high-intensity math training has made me more inclined to study things that deliver a sense of thrill rather than to be a well-rounded academic (the kind who strokes his beard and says hmmmm a lot), kind of like how a competitive swimmer may find it hard to swim at leisure. My grades apart from English are all A's (though most are borderline), but nevertheless I don't have much motivation to try too hard (I've talked about this many times on my blog before). In fact, I think I've reached the point where cynicism has been replaced by nonchalance - where I just don't really care but still do it anyway, cause whatever.
Having 3 free periods has been sort of a blessing - I've had the chance to sleep in, keep up with two classes on AI (Machine Learning & Deep Learning and Neural Networks) on Coursera plus my university-level online math course, and watch YouTube for hours a day. This, I realize, has been really conducive to my creative energies - most nights I put myself to sleep listening to some talk by Elon Musk or some other tech magnate on space exploration, AI, self-driving cars, startups, bitcoin and blockchain, quantum computing, blah blah, and I feel I've never learned so much with so little effort.
I've also gotten a bit into competitive programming (USACO, CodeForces, and TopCoder) for fun. I kind of bombed my debuts on the latter two, so let's not talk about that...
Learning about AI has been so fascinating. Though most people envision a future where robots rule the world, we don't realize how prevalent machine learning has already become in our lives. From ads to recommendation systems (music, YouTube, Amazon) to Uber's pricing, machine learning is really about optimizing algorithms based on past experience.

A chess-playing AI, for example, works by associating a probability of winning with each position of the board, so that whatever state the game is in, it evaluates the moves it can make and picks the one with the highest associated probability of winning. How it does so is complex, but the concept is pretty amazing - essentially, the output (the probability) is a function of many factors, such as the level of threat posed by the queen, how far the game has progressed, and the possible number of attacks. Each factor is in turn composed of another layer of data - the number of targets the queen can take out, the number of pieces left on the board, other combinations, etc. Basically, we can organize these data into layers of abstraction and show the dependence of one layer on the next with a neural network, until at the bottommost layer we have the input - the current position of the board. Essentially, a neural network is represented by a weighted-edge graph, where each edge shows how heavily one factor depends on another, such as how much the level of threat by the queen depends on the number of pieces in its line of attack. By using supervised learning, we can train the AI on pre-played game move sequences, or better yet, make it play against itself to optimize its strategy. With enough data (millions of inputted or self-generated data sets), the weights of the edges (and thus the neural network as a whole) adjust to fit the data as accurately as possible - essentially regression. This is akin to optimizing multivariable functions with partial derivatives, still mostly basic math, which is what I've been exposed to so far.
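To make the weight-tuning idea concrete, here's a toy sketch (all numbers invented for illustration): a single weighted node whose weights are nudged by the partial derivatives of the squared error, i.e. gradient descent.

```python
# A toy version of the idea above (all data made up): one weighted
# node, trained by gradient descent - each weight is nudged against the
# partial derivative of the squared prediction error.

def predict(weights, features):
    # Weighted sum of the input features: a single node of the network.
    return sum(w * x for w, x in zip(weights, features))

def train(data, steps=1000, lr=0.01):
    weights = [0.0, 0.0]
    for _ in range(steps):
        for features, target in data:
            error = predict(weights, features) - target
            # Move each weight a small step downhill on the error surface.
            weights = [w - lr * error * x for w, x in zip(weights, features)]
    return weights

# Toy "positions" scored by the hidden rule 2*x0 + 3*x1.
data = [([1, 0], 2), ([0, 1], 3), ([1, 1], 5), ([2, 1], 7)]
print([round(w, 2) for w in train(data)])  # converges near [2.0, 3.0]
```

The weights recover the hidden rule the toy data was generated from; the same mechanism, scaled up to millions of weights across many layers, is what training a neural network amounts to.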
What I find more interesting is unsupervised learning - the basis behind some recent tech perks like the iPhone X's face recognition. Essentially, an AI is given some examples and teaches itself to organize, distinguish, and extract the structure in the data that users can make use of. For example, an AI can recognize cats in images based only on the pixels of the image - RGB values - or can separate the singing voice from the non-singing parts of an audio file, or distinguish spam e-mails from non-spam. A cool article recently showed how an AI can come up with one-sentence descriptions of any image it sees - like "The bright-colored sunflower plant in a garden." Exactly which features the AI extracts and looks for in the data it's given aren't the result of hard-coded algorithms but, as described above, of neural networks.
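As a toy illustration of learning without labels (not the actual face-recognition pipeline, just the general idea): k-means clustering, which discovers groups in unlabeled points on its own. The points and starting centers below are made up.

```python
# A toy sketch of unsupervised learning: k-means clustering on made-up
# 1-D points. No labels are given; the algorithm finds the groups itself.

def kmeans(points, centers, rounds=10):
    for _ in range(rounds):
        # Assign each point to its nearest center...
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # ...then move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
print(kmeans(points, centers=[0.0, 5.0]))  # two groups found, near 1.0 and 9.07
```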
There are really infinite applications one can think of, so learning about it has been really fun.
Since I'll have plenty of time in second semester, perhaps I can design a neural network to make a virtual AI girlfriend texting app to keep me company (I'm only half-joking).
Anyhoo, only 12 more days before finals week (the same week early app results come out), when I'll hopefully be done submitting all my apps - then the gooooood life begins.
Anyhow, let's finish with a CodeForces problem I solved but haven't implemented yet (my solution is different from the official one, but I proved it works, yay):
Slightly harder than CodeForces Round 447 Div. 2 C wrote:
In a dream Marco met an elderly man with a pair of black glasses. The man told him the key to immortality and then disappeared with the wind of time.
When he woke up, he only remembered that the key was a sequence of positive integers of some length $n$, but forgot the exact sequence. Let the elements of the sequence be $a_1, a_2, \ldots, a_n$. He remembered that he calculated $\gcd(a_i, a_{i+1}, \ldots, a_j)$ for every $1 \le i \le j \le n$ and put it into a set $S$. Note that even if a number is put into the set $S$ twice or more, it only appears once in the set.
Now Marco gives you the set $S$ and asks you to help him figure out the initial sequence. If there are many solutions, print any of the smallest of them. It is also possible that there are no sequences that produce the set $S$; in this case print $-1$.
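To make the statement concrete, here's a quick brute force (my own helper, not part of the problem) that builds the set $S$ from a candidate sequence by taking the gcd of every contiguous subsegment:

```python
# Build S from a sequence: collect the gcd of every contiguous
# subsegment a[i..j] into a set (duplicates collapse automatically).
from math import gcd

def build_set(a):
    S = set()
    for i in range(len(a)):
        g = 0
        for j in range(i, len(a)):
            g = gcd(g, a[j])  # gcd of a[i..j], extended one element at a time
            S.add(g)
    return S

print(sorted(build_set([4, 6, 12])))  # [2, 4, 6, 12]
```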
Solution
First, observe that every term $a_k$ of the initial sequence must be in $S$, else we take $i = j = k$ and get $\gcd(a_k) = a_k \notin S$, contradiction. If we let $M = \max S$, then $M$ must be in the original sequence, as otherwise we can't generate it as a gcd (the gcd of any longer segment is at most its smallest element). Now let's run the following algorithm of $|S| - 1$ steps that checks whether each of the remaining $|S| - 1$ elements of $S$, taken in decreasing order, belongs to the original sequence.
1. At step $k$, let cur be the $k$-th largest of the remaining elements of $S$, and let $T$ be the current subset of $S$ that must be in the original sequence.
2. Check if among the gcd's of the subsets of $T$ one can produce cur. If yes, then cur need not be in the sequence, and we can skip it. If not, then cur must be added to $T$, as otherwise we can't produce cur as a gcd.
3. If cur was added above, then iterate through $\gcd(\text{cur}, \text{any subset of the original } T)$, and if even one of these values can't be found in $S$, then this is a contradiction - cur can't be added, so return $-1$. Else proceed with step $k+1$.
This produces the shortest possible sequence and detects when no such sequence exists.
But the run time is $O(2^{|T|})$ at each step, which is trash and a guaranteed time limit exceeded on CodeForces, but hey, I'm a mathematician, not a programmer.
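The steps above can be sketched in Python. One shortcut on my part: instead of re-enumerating all subsets of $T$ at each step, the sketch maintains the set G of every subset-gcd of $T$ incrementally (adding cur only creates the values $\gcd(\text{cur}, g)$ for $g \in G$, and each must itself land in $S$ or we bail), which sidesteps the exponential check. I haven't run this against the judge.

```python
# Sketch of the algorithm above. Shortcut (mine): rather than checking
# gcds of all 2^|T| subsets at each step, maintain G = the set of gcds of
# every nonempty subset of T; adding cur only creates gcd(cur, g) for g in G,
# and each new value must itself lie in S or we have a contradiction.
from math import gcd

def reconstruct(S):
    S = set(S)
    elems = sorted(S, reverse=True)
    T = [elems[0]]     # max(S) must be in the sequence
    G = {elems[0]}     # gcds of all nonempty subsets of T
    for cur in elems[1:]:
        if cur in G:
            continue   # cur already arises as a gcd: skip it
        # cur must join the sequence; every gcd it creates must stay in S
        new = {cur} | {gcd(cur, g) for g in G}
        if not new <= S:
            return None  # contradiction: would print -1
        T.append(cur)
        G |= new
    return T

print(reconstruct({2, 4, 6, 12}))  # [12, 6, 4]
print(reconstruct({2, 3}))         # None (no valid sequence)
```

On {2, 4, 6, 12} the element 2 gets skipped (it already arises as gcd(6, 4)), so the output is shorter than the set itself.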
This post has been edited 3 times. Last edited by shiningsunnyday, Nov 23, 2017, 1:31 AM