A key part of good grades and an all-around good time in a specific class is to keep yourself up to date with the material and make sure you understand it. Pop quizzes are the bane of our existence. That’s why study groups are probably the most powerful resource a student can get their hands on; they not only bring several minds together to learn different aspects of a difficult subject, but they also help build strong bonds between classmates. Everyone learns differently, and that’s why study groups are such life savers — you might find someone who can explain it to you in your learning style, or you can help them! It’s so important that you come to the group prepared so your studying is as effective as possible. Here is our list of things you should bring to your next study session!

1. Electronics — Laptop/tablet, phone, etc.

As much as we’d hate to admit it, we depend on our electronics, especially when it comes to studying. Bring something you can use the Mighty Google with, but don’t forget to mute the social media and YouTube notifications!

2. Chargers for all previously listed devices.

Nothing says “absolute chaos” like running out of battery life on your laptop in the middle of an emergency cram session.

3. Books.

Not all of your answers will be on the internet, and not all of your homework or projects will, either. Be sure to bring every book or material packet you have for that specific class; otherwise you’ll be passing around the one book that someone brought by accident.

4. Writing utensils.

Can’t get anywhere without them, can you? Bring an array of pens, pencils, and highlighters with you — the more the merrier.

5. Snacks!

The key to a good study session is to keep the snacks satisfying, but not so heavy that you get sleepy. Avoid big meals with lots of bread. A sweet and salty trail mix, a bag of grapes, peanut butter and any sort of dipping food… the possibilities are endless.

6. Drinks.

Just like with snacks, you don’t want to go crazy. Keep warm drinks, or drinks with lots of dairy, to a minimum so you don’t feel full and get tired. Energy drinks are your enemy, because they make you crash so hard.

7. Headphones/ear buds.

A lot of people study with music, and a lot of people don’t. Bring your headphones just in case you need a little extra soothing musical motivation but don’t want to bother your study buddies with your music choice.

8. Fidget tools.

So this one is a little open to interpretation, because there are a lot of things that can ease anxiety or help with ADD, ADHD, or general fidgeting during study time. It can be gum, a fidget spinner, a fidget cube, Play-Doh, a stress ball, or even a coin! If you need it, bring it!

Are there any other important items we missed? What helps you study the best? Let us know in the comments!

There’s a myth that’s as old as the idea of school itself that says you have to have a special gift to be good at mathematics; that your brain needs to be one-sided and that math is just plain hard. Well, that’s not entirely true. A lot of students have fears when it comes to numbers, and that’s okay; we just need to learn to embrace it! We have put together a list of reasons to keep on doing math — reasons why it’s not scary and how to make it easier. Keep on reading!

#1. Math is in everything!

Have you ever baked or cooked anything? Maybe you’ve used a ruler to measure out fabric, or a protractor to trace a perfect circle? Adding three-fourths cup of butter to a mountain of chocolate chips and a flour concoction isn’t just the recipe for chocolate chip cookies; it’s math — all of it!

#2. Math shows.

They make a lot — and we mean a LOT — of TV shows for students, to help them grasp concepts they might have otherwise found difficult. Cyberchase is probably one of the biggest ones out there; it ran for seasons upon seasons and covered basic math all the way up to algebra and geometry.

#3. Ditch the negativity!

We know it’s hard, but when it comes to math and your perception of it, a positive outlook is definitely going to help. Don’t say you’re “bad” at math, because you’re not! Everyone can conquer the subject once they find their own way of learning it. It could be something as simple as needing a different teaching style and a different teacher, or maybe you’re just more hands-on than visual. Experiment. You got this.

#4. Teach someone else!

Ironically, we oftentimes don’t understand something until we need to teach it to someone else. Grab a classmate who’s struggling, or even grab your parent or guardian, and teach them what you learned.

5. Make a game or a song out of something that needs to be remembered.

It might seem silly, but if you take the time to set an equation or a formula to a tune, you can recall it much more easily than if you just forced yourself to memorize it.

Math can be a challenge to some, but it doesn’t have to make you miserable. Are there other ways to make it fun? Let us know in the comments!

There are thousands of ways to study; some do it in chunks with timed and manageable breaks, others cram it all into one hour, and some have zany, yet effective, ways to get the brain juices going. Everyone’s style is different. Whatever way you study, there is oftentimes music involved, right? That’s because music can actually help you retain more information! It has to do with triggering certain centers of the brain that help you absorb all that scholarly goodness. There are some types of music that hinder studying, too, so it’s important to remember the kind of music that is good for this specific task: lyricless. (This is something that applies to a lot of people, but not to all!) The types of lyricless — or instrumental — music are just about as vast as the genres with lyrics in them, so finding the right one might be hard, but many streaming music apps and websites have pre-made lists for you to browse! Don’t worry; we know it’s overwhelming, so we’re here to offer up our top twenty list of songs that are perfect for study time.

Before we get into that, you should know that you have the ability to choose between instrumental, ambient, and nature noises! Instrumental is pretty self-explanatory: you have a song typically played by one instrument — a piano or an acoustic guitar — with gentle rhythms and melodies. (Our list also includes a violin sonata or two.) Ambient music is made almost completely electronically, with no real pattern, and is used specifically to enhance emotions. Nature noises are, well, nature noises. That being said, here we go:

#1. Welcome Home by Sigriour

#2. The Watchtower by Sigimund

#3. Iles de la Madeleine by Tommy Berre

#4. Gotham (acoustic) by Claes Nilsson

#5. The Ludlows by Cristopher Varela

#6. Literally anything by Bach.

#7. Serenade: IV by Leonard Bernstein

#8. Road Movies I by John Adams

#9. Scottish Fantasy, Op. 46: I by Max Bruch

#10. Oblivion by Gidon Kremer

#11. Trancemission Four by Delta Rain Dance

#12. See by TENGGER

#13. Walking in a spiral towards the house: Side C by Nivhek

#14. Everything All at Once by G Jones

#15. Badlands Train by Stubbleman

#16. Rhapsody in Green by Mort Garson

#17. Nevergreen by Emancipator

#18. Nest by Justin Nozuka

#19. Second Sun by Bonobo

#20. A Spark, A Beginning by Corre

Note: When it comes to nature noises (you’ll notice we don’t have any listed here), just about anything will do: a thunderstorm, a light rain shower, wind in the grass, crickets, the ocean, a stream… You see where we’re going with this? If you like to study to the music of Mother Earth, then you will just have to pick a sound and go from there! There are no “top favorites” in nature!

Let us know in the comments what songs get you in the studying mood!

By Camille D., Age 17

This article will focus on a method data scientists and programmers use to make data easier to explore, visualize, and interpret, called principal component analysis (PCA). The explanations in this article assume some background in linear algebra and statistics.

PCA is based on dimensionality reduction: “the process of reducing the number of random variables under consideration by obtaining a set of principal variables,” in other words, transforming a large dataset into a smaller one without losing too much key information. Smaller datasets are much easier for machine learning algorithms to process, but the reduction comes at a price: a little accuracy must be traded for simplicity. Minimizing this cost is part of the job of PCA.

The first step of PCA is standardization, the process that is the least mathematically involved. Standardization takes care of the variances within the initial variables, specifically with regard to their ranges. For example, the value of one variable may lie within the range of 0 to 10, and the value of another within the range of 0 to 1. The variable whose possible value lies between 0 and 10 will carry greater weight than the second variable, leading to biased results. Mathematically, this is addressed by subtracting each variable’s mean from its values and dividing the result by that variable’s standard deviation.

After standardization is performed, all of the variables will be on the same scale.
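
To make the step concrete, here is a minimal NumPy sketch of standardization; the tiny dataset and its values are made up purely for illustration.

```python
import numpy as np

# Hypothetical dataset: one variable ranges roughly 0-10, the other roughly 0-1,
# so the first would dominate the analysis without standardization.
data = np.array([[2.0, 0.1],
                 [8.0, 0.9],
                 [5.0, 0.4],
                 [9.0, 0.2]])

# Subtract each column's mean and divide by its standard deviation.
standardized = (data - data.mean(axis=0)) / data.std(axis=0)

print(standardized.mean(axis=0))  # approximately 0 for every variable
print(standardized.std(axis=0))   # 1 for every variable
```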

Note that standardization is different from normalization in descriptive statistics. Normalization rescales the values into a range from 0 to 1, while standardization rescales the dataset to have a mean of 0 and a standard deviation of 1. Normalization is performed with the following equation:

X_normalized = (X - X_min) / (X_max - X_min)

In almost any case, of course, this will yield a value smaller than 1.
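
For comparison, here is the same kind of sketch for min-max normalization, again with made-up values, so you can see the difference between the two rescalings.

```python
import numpy as np

# Hypothetical values for illustration.
values = np.array([2.0, 8.0, 5.0, 10.0, 0.0])

# Min-max normalization: the smallest value maps to 0 and the largest to 1.
normalized = (values - values.min()) / (values.max() - values.min())

print(normalized)  # every entry now lies between 0 and 1
```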

The second step, covariance matrix computation, is where things unfortunately begin to get more complicated. We first must understand the definition of covariance: “a measure of how much two random variables vary together.”

Cov(x, y) = E[(x − E[x])(y − E[y])]

Covariance differs from correlation in that correlation describes how strongly two variables are related, while covariance indicates the extent to which two random variables change with one another. The values of covariance lie between -∞ and ∞, while the values of correlation lie between -1 and 1. Correlations can be obtained only when the data is standardized.

Covariance matrix computation aims to investigate how the variables in the input dataset are related to one another. This is important because it helps detect redundant information that may come from a high correlation between two elements. We compute a covariance matrix to determine these correlations. The covariance matrix is an n × n matrix, where n is the number of dimensions, whose entries are all of the possible covariances within the dataset. For example, for a two-dimensional dataset with variables x and y, the covariance matrix is as follows:

Cov(x, x)   Cov(x, y)
Cov(y, x)   Cov(y, y)

A couple notes:

  • Cov(x, x) = Var(x), the variance of the initial variable.
  • The Cov() operator is commutative, so Cov(x, y) = Cov(y, x), which means the upper and lower triangular portions of the matrix are equal.

 

The covariance matrix is simply an organized table that lists the covariances between all possible pairs of variables. The sign of each covariance is what tells us about the correlation between elements. If the covariance is positive, then the two variables are directly correlated (they increase and decrease together). If the covariance is negative, the relationship between the two variables is an inverse correlation.
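
As a rough sketch of this step, the covariance matrix of a standardized dataset can be computed with NumPy; the random two-dimensional dataset below is just a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((100, 2))  # hypothetical two-dimensional dataset

# Standardize first, as described above.
standardized = (data - data.mean(axis=0)) / data.std(axis=0)

# rowvar=False tells NumPy that each column (not each row) is a variable.
cov_matrix = np.cov(standardized, rowvar=False)

print(cov_matrix)           # 2 x 2 and symmetric: Cov(x, y) == Cov(y, x)
print(np.diag(cov_matrix))  # the diagonal entries are Var(x) and Var(y)
```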

The next step in PCA is actually identifying the principal components by computing the eigenvectors and eigenvalues of the covariance matrix. The number of principal components produced from the dataset is equal to the number of dimensions in the set. Principal components are “combinations” or “mixtures” of the initial variables, and they are constructed so that they are uncorrelated with one another and so that as much of the variability of the initial variables as possible is stored in the first component, with each succeeding component accounting for as much of the remaining information as possible, as shown in the example plot below for an 8-dimensional dataset:

This form helps significantly with dimensionality reduction because it eliminates components with little to no information while still retaining the information that describes the key relationships within the data. Consider the dataset below:

The direction of the first principal component line represents the direction of the highest variability in the data. Since the variability is the largest in the first component, the information captured by the first component is also the largest. It’s the line onto which the projections of the points are the most spread out; this line maximizes the average of the squared distances from the projected points to the origin. The direction of the second principal component line should be orthogonal to the first in order for the principal components to be completely uncorrelated.

We continue to calculate principal components n times, where n is the number of dimensions in the original dataset.

Going back to eigenvectors and eigenvalues, here are a couple of preliminary notes:

  • Every eigenvector has its own corresponding eigenvalue.
  • The number of eigenvectors and corresponding eigenvalues is equal to the number of dimensions/variables in the data.
  • For a tutorial on how to calculate the eigenvalues and eigenvectors of a matrix: https://www.scss.tcd.ie/~dahyotr/CS1BA1/SolutionEigen.pdf.

 

The eigenvectors of the covariance matrix give the directions of the principal component axes, and the eigenvalues are the coefficients attached to those eigenvectors, giving the amount of variance carried by each PC. The PCs in order of significance can be obtained by ranking the eigenvalues from highest to lowest. To get the percentage of the variance carried by each PC, divide each eigenvalue by the sum of all eigenvalues.
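
Here is a minimal sketch of that calculation in NumPy, using another made-up random dataset; the eigenvalues are sorted and converted to variance percentages exactly as described above.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((100, 3))  # hypothetical three-dimensional dataset
standardized = (data - data.mean(axis=0)) / data.std(axis=0)
cov_matrix = np.cov(standardized, rowvar=False)

# eigh is meant for symmetric matrices such as the covariance matrix.
eigenvalues, eigenvectors = np.linalg.eigh(cov_matrix)

# Rank the principal components from highest to lowest eigenvalue.
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

# Percentage of the total variance carried by each principal component.
explained = eigenvalues / eigenvalues.sum()
print(explained)
```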

Next, we have to determine whether we want to keep all of the components or discard the lesser ones (those with low eigenvalues). We form a matrix called the feature vector from the eigenvectors of the components we do keep. This is where dimensionality reduction actually happens, since we are cutting down from the initial number of principal components, which is equal to the dimension of the original dataset.

Lastly, we use our feature vector to restructure our dataset, in a sense. We want to put our data in terms of the axes given by the principal components instead of the original axes. We can do this pretty easily by multiplying the transpose of the feature vector by the transpose of the standardized original dataset.
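
Putting the last two steps together, a minimal NumPy sketch of building the feature vector and recasting the data might look like the following; the random dataset is a stand-in, and keeping two components is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((100, 3))  # hypothetical three-dimensional dataset
standardized = (data - data.mean(axis=0)) / data.std(axis=0)
cov_matrix = np.cov(standardized, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov_matrix)
order = np.argsort(eigenvalues)[::-1]
eigenvectors = eigenvectors[:, order]

k = 2                                 # how many components we decide to keep
feature_vector = eigenvectors[:, :k]  # one column per retained eigenvector

# FinalData = FeatureVector^T x StandardizedData^T, as described above.
final_data = feature_vector.T @ standardized.T

print(final_data.shape)  # (2, 100): each sample expressed on the two new axes
```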

 

Not everyone is perfect, including our friendly neighborhood video game developers, who sometimes don’t catch the programming errors in a game during test plays before it’s released to the public. Programming is hard, and these glitches can result in some seriously funny and downright inexplicable occurrences. Here are our favorite glitches from video games that were all due to programming errors:

1. Madden NFL 15’s “Tiny Man”

This glitch resulted in a Browns linebacker being turned into a Thumbelina version of himself, but with all the speed, strength, and agility of his original size. The game creators rolled with the glitch, though, and even kept the mini player model after fixing the problem.

2. Big Rigs

The game itself is notorious for glitches, one of which happens when a big 18-wheeler zaps itself through a bridge, hugs a mountain, and then speeds up to an unmentionable speed before crashing the game entirely.

3. The Sims games

Earlier versions of the game were known to glitch out, from smoking body parts to vanishing furniture. One of the most recent glitches that sparked an array of giggles is the “monster baby,” in which a baby character is only a flesh-colored Picasso collage.

4. Pokémon Red / Blue / Yellow’s “MissingNo. Trick”

This “Pokémon” shows up after the in-game tutorial that teaches you how to catch a Pokémon. After talking to the Old Man, who will so wonderfully teach you the catching ways, you’ll run into a wild “MissingNo.,” which is a nameless Pokémon glitch that can either duplicate items or delete your entire saved game — it was a 50-50 chance.

5. Super Smash Brothers Melee’s Master Hand

For eight years, players thought it was impossible to play as the infamous Master Hand, but then the glitch was found!

6. Pokémon Red / Blue / Yellow’s Mew Finding

That’s right, you can catch an actual Mew with this glitch, with no extra mods or special work on your part.

7. Tiger Woods PGA Tour ’08

There is a specific glitch in this game in which Woods walks on water to get a better shot. It got so much attention that EA Games had to address it publicly.

There are many glitches in the video game world, because games are very complex to make, and it’s hard to oversee every single aspect of one once it’s been released. Sometimes glitches are frustrating, but most of the time, they’re just fun. What other glitches have you experienced? Let us know in the comments!