MultiTag Photo Classifier with Deep Learning & PyTorch

By Terence Lee

Digital Pictures Tagging, Classification and Retrieval 

Using deep learning, transfer learning, and PyTorch to tag and classify a dataset of tens of thousands of family vacation photos, facilitating easy search and retrieval based on parameters such as scenery, building, flowers, sculpture, animal, nighttime, garden, person, church, etc.

Prediction using a multi-tag dataset built on a CNN with PyTorch and deep learning

The Idea

I started this project when my family was scrolling through tens of thousands of photos to collect specific scenery photos to design and make a coffee table book. 

My goal was to train a neural net so that the entire library of family pictures could be identified, tagged and categorized. Once machine-tagged, all the relevant photos could be indexed and rendered. 


Build a CNN using Deep Learning/PyTorch

After further research, I decided on implementing a convolutional neural network (CNN) using an industry-standard deep learning library, PyTorch, which would feed predictions into a database indexed and easily searchable via the Elastic Stack.

MultiTag Photo Classifier Flowchart

Environment Setup

My initial plan was to build this project using tools such as Jupyter Notebook for its ease of use and Google Colab for its cloud processing and training capabilities. However, I ran into numerous kernel- and module-related issues when trying to run my code. 

After creating a virtual environment in conda, installing the necessary dependencies, and assigning the Jupyter Notebook kernel to the appropriate env, there still seemed to be a dependency issue when running the training for the PyTorch model. 

It appeared that there was a bug with the Jupyter Notebook: in the middle of running a code block, the process crashed and returned a traceback:

  • BrokenPipeError: [Errno 32] Broken pipe

  • AttributeError: Can't get attribute 'NusDataset' on <module '__main__' (built-in)>

After various attempts to debug the environment, and in the interest of saving time, I opted to simply migrate the code back into a native Python environment and file hierarchy.

I had originally set up the training model on an Nvidia Jetson Nano portable computer. The Jetson Nano Developer Kit was easy to configure and would have made training and running the model much easier, but due to its ARM processor I ran into compatibility issues with my environment. For better performance, I decided to train the deep learning classification model on my Nvidia GTX graphics card instead, which significantly reduced training time and eliminated the compatibility issues.


Class structure:

Three folders are set up:

  • The “input” folder houses all our raw data, which are vacation photos. Inside, we have a train.csv file that lists all the photo image names classified by their respective attributes. 

  • The “output” folder contains our trained models along with their graphical loss plots from each iteration.

  • Finally, we have the “src” folder, which contains thirteen Python scripts. We will cover each of these items through the rest of this article.

Dataset preparation

My original approach was to train a CNN from scratch using the entire picture library, but I decided that implementing transfer learning on an existing model (in this case, ResNet50) would be more time-efficient for general classification. For the training/testing dataset, I selected 1,500 vacation images, manually tagged them with 29 unique tags, and stored that information in a CSV file.

Train/Validate/Test

As a starting point, I adopted a generic code skeleton that was built for classifying the genres of movie posters through transfer learning. Various code modifications were made to tailor it to the photo multi-tag project, including:

  • altering the model dimensions so that my custom dataset could fit the final ResNet layer

  • adding additional transforms such as image normalization

  • adding code that converts the output model file into an ONNX file

The code utilized PyTorch for training/tagging, OpenCV for image preprocessing, and Matplotlib for generating loss plots. The dataset was split 85-15 between training and testing respectively, as higher training ratios generally yield more accurate models. 


Optimal number of epochs to train a neural network

The number of epochs used to train a neural network will impact the accuracy of the resulting model. Too many epochs may cause the model to overfit the training data, meaning that the model merely memorizes the data instead of learning from it. With each iteration, as the number of epochs (passes of weight updates) increases, the model goes from “underfitting” to “optimal” to “overfitting”.

Data Training Chart - Loss vs Epochs

Through trial and error, I concluded that ~25 epochs was the optimal point where training and validation loss were minimized. In the above graph, both training and validation loss decrease as epochs increase. Though not pictured, once the model passes 20 epochs both loss plots stagnate; however, the classification accuracy of the resulting model spikes significantly.

My research into the PyTorch library revealed that the loss function originally used, BCELoss applied to a sigmoid output, is less numerically stable than the equivalent BCEWithLogitsLoss, which I decided to use instead. 
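The two formulations are mathematically equivalent, but BCEWithLogitsLoss fuses the sigmoid into the loss via the log-sum-exp trick, avoiding the precision loss of saturating the sigmoid first. A quick check of the equivalence on sample values:

```python
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, -1.0, 0.5]])
targets = torch.tensor([[1.0, 0.0, 1.0]])

# Less stable: squash with a sigmoid, then apply plain BCELoss
loss_sigmoid_bce = nn.BCELoss()(torch.sigmoid(logits), targets)

# Preferred: BCEWithLogitsLoss takes the raw logits directly
loss_with_logits = nn.BCEWithLogitsLoss()(logits, targets)

# Both produce the same value for well-behaved inputs
print(loss_sigmoid_bce.item(), loss_with_logits.item())
```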

While training, I ran into an issue where the process would error out at random intervals due to a CUDA device-side assert. Through further research and debugging, I discovered the cause to be NaN image tensors being fed into the model, which triggers an error in the BCE loss function. More specifically, a bug in the preprocessing stage of the code produced incorrect image file names, leading OpenCV to open invalid or non-existent files, which resulted in the NaN tensors.
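One defensive fix for this class of bug is to filter NaN tensors out of each batch before they reach the loss. A minimal sketch; the helper name sanitize_batch is mine, not from the project:

```python
import torch

def sanitize_batch(images: torch.Tensor, labels: torch.Tensor):
    """Drop any samples whose image tensors contain NaN values.

    NaN inputs propagate through the network and make BCE-style
    losses return NaN, which can surface as a CUDA device-side assert.
    """
    # One boolean per sample: True if the image is NaN-free
    keep = ~torch.isnan(images).flatten(1).any(dim=1)
    return images[keep], labels[keep]
```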


Modifying class structure to finetune and randomize test results

During the classification stage, there were several severely underrepresented tags in the data, leading to classification issues. To fix this bias, I added more diverse pictures to ensure that all tags in the family vacation photo library were well represented. 

The base code was designed so that the last 10 rows of the CSV file were always used for classification once the model was trained. This implementation had very rigid use cases, so I modified the class structure and altered the base inference.py script. 

The new dataset class (dataset2.py) marks an entire CSV for classification rather than just the last x images, which is helpful for classifying large volumes of images for the Elasticsearch database. Alternatively, for better testing and visualization purposes, the shuffle.py script randomizes all the image rows so that the images classified aren’t always the same x images. The model was then saved and exported as a .pth file so I could move on to the web-app section.
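The row-shuffling idea behind shuffle.py can be sketched with the standard library alone; the function below is an illustrative stand-in, not the script's actual code:

```python
import csv
import random

def shuffle_csv(in_path: str, out_path: str, seed: int = 42) -> None:
    """Shuffle the data rows of a tag CSV while keeping the header row first."""
    with open(in_path, newline="") as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    random.Random(seed).shuffle(body)  # seeded for reproducibility
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(body)
```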

Modified class structure to randomize and finetune dataset

The Result - Predicted versus Actual

Predictions were made based on set parameters such as scenery, building, flowers, sculpture, animal, nighttime, garden, fountain, church, etc. as defined in the trained deep learning model. Actual tag assignments can be found here.

Deploying Deep Learning Multi Tag Model via Flask on Heroku

My research first led me to consider options such as ONNX and TensorRT, as they seemed to be both versatile and powerful tools. Unfortunately, I ran into incompatibility and configuration issues at each step of the process. As such, I pivoted to simply taking the .pth file and building a web app that imports the model file directly into the CPU version of PyTorch (which is inherently much less resource-intensive). 

Pivoting to Flask

Several articles pointed me to Flask, a commonly used tool for directly interfacing with PyTorch and translating model outputs into a clean and intuitive UI. After studying the Flask documentation, I implemented a basic Flask app to test its capabilities. 

The app simply returned a JSON response containing the model’s predictions when prompted with an HTTP request carrying the input image. Since the app was functional, I continued looking into Flask as a web app framework and discovered two different UI approaches. 
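A minimal Flask app of the kind described might look like the following; the route name, the "file" form field, and the stubbed get_prediction are assumptions for illustration:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def get_prediction(image_bytes):
    # Stub standing in for the real model inference
    return ["scenery", "garden"]

@app.route("/predict", methods=["POST"])
def predict():
    # The image arrives as a multipart upload named "file"
    uploaded = request.files["file"]
    tags = get_prediction(uploaded.read())
    return jsonify({"tags": tags})

if __name__ == "__main__":
    app.run()
```

Posting an image to /predict returns a JSON body like {"tags": [...]}, which is exactly the shape a downstream indexer can consume.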

The first involved prompting a user to upload an image, upon which the app outputs the uploaded image as well as the predicted tags; simple enough. A live demo of my implementation can be found here.  

The second involved the user inputting a website link containing images, which the app would scrape and classify, outputting the predicted tags and a tabulated list of the most frequent tags. Since both options provided different useful functionality, I analyzed both skeleton codes, restructured them to fit my model, and hosted them on Heroku so they could be demoed by anyone interested in the project. 

The upload web app code relied on a pre-trained ImageNet model for classifying the uploaded image; since I was trying to run classifications on my transfer learning model, I had to completely rewrite the get_prediction function called in app.py. 

This involved importing the transform_image function from the model training code, loading the PyTorch .pth model into memory, and condensing the inference.py file into a new get_prediction function that takes the input image and classifies it using the vacation dataset classifier I trained. 
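For a multi-label classifier, the rewritten get_prediction presumably applies a sigmoid to the model's logits and keeps every tag above a threshold. A sketch under those assumptions (the tag list shown and the 0.5 threshold are illustrative, not the project's actual values):

```python
import torch

# Hypothetical subset of the 29 vacation tags
TAGS = ["scenery", "building", "flowers"]

def get_prediction(model, image_tensor, threshold=0.5):
    """Return all tags whose predicted probability exceeds the threshold."""
    model.eval()
    with torch.no_grad():
        logits = model(image_tensor.unsqueeze(0))  # add batch dimension
        probs = torch.sigmoid(logits).squeeze(0)   # per-tag probabilities
    return [tag for tag, p in zip(TAGS, probs.tolist()) if p >= threshold]
```

Unlike single-label softmax classification, this can return zero, one, or several tags per photo, which is what multi-tagging requires.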

Before rewrite:

After rewrite:

Image Scraper Code

For the image scraper code, the changes I made were fairly similar. It utilized a pre-trained DenseNet121 model for classification, which I had to replace with my own model. This involved rewriting the get_prediction function as well as changing the arguments to remove any dependency on the pretrained model. As with the other code skeleton, I ported the image transformation and classification functionality from the model training code into the three existing functions (transform_image, get_category, get_prediction). I also adjusted the function output so that the top x predicted tags are returned rather than just one tag.
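Returning the top x tags instead of a single class is straightforward with torch.topk; a sketch (the function name is mine, for illustration):

```python
import torch

def top_tags(probs: torch.Tensor, tag_names, k: int = 3):
    """Return the k highest-probability (tag, probability) pairs."""
    values, indices = torch.topk(probs, k)
    return [(tag_names[i], round(v, 3))
            for v, i in zip(values.tolist(), indices.tolist())]
```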

Image scraper code before:

Image scraper code After:

Elastic Stack:

The next step is to import the entire multi-tag database of vacation photos into an Elasticsearch database. I currently have a script that has classified all the images and stored the results in a JSON file. This JSON file is then formatted for the Elasticsearch database before being uploaded using an appropriate API key. 
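Elasticsearch's bulk API expects newline-delimited JSON with alternating action and document lines. A sketch of that formatting step; the index name and document fields here are illustrative assumptions:

```python
import json

def to_bulk_ndjson(tagged_photos, index="vacation-photos"):
    """Format classifier results as Elasticsearch bulk-API NDJSON."""
    lines = []
    for photo in tagged_photos:
        # Action line tells Elasticsearch which index/document to write
        lines.append(json.dumps({"index": {"_index": index, "_id": photo["image"]}}))
        # Source line holds the document body itself
        lines.append(json.dumps({"image": photo["image"], "tags": photo["tags"]}))
    return "\n".join(lines) + "\n"  # bulk payloads must end with a newline
```

The resulting string can be POSTed to the _bulk endpoint, after which each photo is searchable by any combination of its tags.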

At the same time, I am working to continue to improve the model accuracy. My plans are to eventually use this database in a web app where users can search for pictures via specific single or multiple tag(s) and the app will pull and display all matching results.


Back to the Future Classroom: VR/AR/AI Transformation

By Alice Liu

Image Credit: Pixabay


Virtual reality (VR), Augmented Reality (AR), and Artificial Intelligence (AI) have the potential to reinvent education in the future classroom, shifting from a culture of teaching to “learning through experiences”. From virtual science labs to virtual escape rooms to machine learning robots, VR, AR, and AI can reshape the future of education. Throughout the years, these three technological tools have combined to reach major milestones in many aspects of life, especially education. VR, AR, and AI are super versatile and serve as valuable resources for the classroom. The new breakthroughs in AR/VR technology will prove to be a game-changer in future education. Harnessing these tools can empower and boost students’ learning and offers promising opportunities to extend education to virtually anytime and anywhere, in or outside of the physical classroom.

Learning By Doing - The Best Way to Learn

There is a recent upsurge in using all kinds of technology in the classroom, which has become a norm even for kindergarteners -- something we didn’t see very often a decade ago. The future of learning will be a blended learning ecosystem, infused with technology in a classroom (virtual or physical) made possible by AI and immersive technologies such as VR and AR. Hands-on learning is found to be more effective for information retention when students can touch, interact, and experience rather than just being lectured about the topics. It’s these hyper-immersive learning technologies powered by VR/AR/AI that will enrich and open up a world of cross-cultural learning opportunities, from walking the streets of a foreign country through the Google Earth VR App, to meeting Mona Lisa beyond the glass.

AI Meets VR and AR

By working together, VR, AR, and AI can produce content for whole curriculums and lesson plans -- the AI aspect would account for the technical nuts and bolts while the VR and AR aspects would formulate the virtual experiences and simulations. These tools can also be very engaging for students. As opposed to typical classroom lectures, the world of VR and AI simulations offers countless opportunities for students to break free of the traditional classroom setting and enter a realistic, virtual experience. This not only prepares students by giving them close-to-real-life experiences but also allows them to enjoy learning because they get to experience what it would be like in real life. 

CoSpaces Edu, a VR/AR tool, not only empowers teachers to develop class curriculum and assignments but also allows students to gain hands-on coding experience with designing virtual 3D worlds. For a more physically-hands-on experience, Merge Cube’s spatial computing technology allows students to interact with 3D digital content (ancient artifacts, plant cells, sculptures, and more) uploaded onto a Merge Cube usable anytime, anywhere. Both teachers and students can create STEM content, develop applications, and bring them to life. Another application that uses VR, AR, and AI is Google Sky Map. It is a hand-held planetarium that allows users to point a phone to the sky and discover and track stars, planets, nebulae, and more. 

According to an article by Susan Fourtané on the website Interesting Engineering, researchers have found that using these kinds of technologies in the classroom has many benefits for learners, including increased content understanding, long-term memory retention, improved collaboration, increased motivation, and many more.

From STEM to Humanities - Distance Learning Up Close

Image Credit Pixabay


VR/AR enables students to engage through a headset, and a teacher can teleport into a virtual classroom, providing guidance in a simulated immersive environment. According to ELearning Inside, the cost of VR headsets powered by mobile phones and VR-compatible computers has been declining -- offering a more affordable adoption of VR technology in the classroom. The simulations provided by VR, AR, and AI make it easier and more accessible to engage in hands-on learning without the hassle of high costs or procuring resources. For example, at Hamilton College, these technological tools are paving the way for the future of teaching human anatomy. Students can learn about the body via VR, which allows dissections to be performed without a cadaver. This makes it much cheaper to learn necessary procedures. On top of that, virtual tools like this allow for easier accessibility for classrooms and students lacking the necessary tools or experience to build these skills. 

At the University of California, San Francisco (UCSF), medical students utilize VR to simulate real-life surgery and learn about the human body by zooming in layer by layer, deconstructing and reconstructing muscles, organs, and bones. This also allows them to reverse back to the skin level, reset all processes, and learn a different way. Using VR to simulate medical procedures not only enables the repetitive practice helpful for building real-life surgery skills but also “takes anatomy learning and applies it almost immediately.”

VR, AR, and AI also provide a convenient way for people to learn foreign languages anywhere and anytime. Some common tools for this include Duolingo and Rosetta Stone, which utilize AI so that users can practice speaking languages to a device on top of learning to read and write in those languages. Similarly, MondlyAR, the world’s first AR language learning experience, combines aspects of AR and AI to provide an immersive and interactive experience for learning new languages. MondlyAR’s virtual learning assistant offers a lifelike conversational partner in the form of AR who can help users practice and learn languages by simulating real-world situations using interactive objects in AR, as well as AI to tailor a personal learning experience. Such virtual solutions allow students to immerse themselves in a VR-rendered foreign country with a conversation scenario by utilizing AI-powered chatbots, speech recognition, or animated characters without the expense of traveling. By practicing their language skills using VR, AR, and AI, students will likely feel less intimidated by learning a new language and can improve their language skills while receiving real-time feedback. 

Overall, VR, AR, and AI are all useful and important tools for the physical or virtual classroom as they can provide a more immersive and interactive experience -- no matter what subject is being taught -- that can engage and educate students more efficiently as if they were in a real world situation.

Machine Learning Algorithms

By emulating how the human brain teaches and learns, AI tools can be applied in scientific research and medical diagnoses by developing a neural network that gives computers a “vision” helpful for detecting cancer or recognizing polyps. Using natural language processing, AI teaching assistants can be helpful in answering students’ questions about assessments, deadlines, and other frequently asked questions. Colleges are leveraging AI tools such as Amazon’s voice-enabled assistant Alexa in dorm rooms outside of the classroom to enhance campus life and promote student engagement. Furthermore, Alexa can be tailored to answer specific questions about the college and campus in order to create a smarter classroom.  

AI is a key tool in the classroom and can be used to generate personalized coursework and resources to aid a student in learning as well as automate lesson plans and test/quiz grading. By analyzing and gathering students’ data and interactions with digital learning systems, AI can provide personalized learning experiences and be able to yield the best ways and methods to help students succeed. With data collection, there are always privacy concerns and potential bias. Advocates believe that AI can improve student outcomes and allow instructors more time to interact and engage with students while others believe it may result in formulaic teaching. Nevertheless, AI technologies can be useful both in and out of the classroom for various tasks.

Fighting Inequity in Education

VR, AR, and AI are promising technologies aimed at transforming our learning environments to narrow the digital divide. For example, students who are unable to or cannot afford to travel far distances for a field trip or learning experience are able to do so with VR and even AR. Likewise, with VR & AR immersive and engaging environments now readily accessible, students with learning or physical challenges are able to participate fully in activities and exercises, adapted and tailored to their needs with AI. 

Google Cardboards. Image Credit: Pixabay


One thing that may be an issue for equity and equality, however, is cost. Is the rise in AI/VR/AR technologies creating another potential digital divide? Buying VR headsets, AR and AI applications, or compatible computers can be very costly -- especially for whole schools. Fortunately, mobile headsets paired with smartphones, such as Google’s Daydream View, offer a basic VR experience with a hand controller at a reasonable price. Alternatives are the lower-cost Google Cardboards or Google Earth, which is free and accessible. One way to widen the adoption of and access to VR/AR apps is to crowdsource VR experiences or experiments and make them available to all students through public hubs such as libraries or technology labs. 

 The Rise of VR/AR/AI Amidst COVID-19

With the global pandemic and shelter-in-place orders, the demand for and usage of technologies like VR, AR, and AI have risen. At the University of North Carolina at Chapel Hill, Steven King, an associate professor at the Hussman School of Journalism and Media, used VR to enhance virtual and remote learning for his students. King sent out Oculus Go VR headsets to his students and began to interact with them as they built their own avatars, such as robots, pandas, ducks, and other characters, in a 3D virtual lab class. King allowed students to roam around, collaborate with teammates at different tables, write on virtual whiteboards, and develop problem-solving skills by navigating the new virtual environment. Beyond learning, King said that when an event or activity happens in a physical place or around a particular object, students remember it better. He built the virtual setting to include tables with a tennis ball, basketball, or soccer ball overhead to help his students with memory retention. These tools and resources have proven to be very useful and engaging in the classroom -- especially for remote learning.

Beyond COVID-19 – The New Normal

Many educators are struggling to come up with strategies to return to school safely, as children need classroom interaction to build positive social, emotional, and cognitive skills. However, with the recent surge in COVID-19 cases, school districts are opting for 100% virtual classes through video conferencing platforms to keep students, teachers, and school staff members safe. Nevertheless, it is challenging to replicate classroom interactions through remote digital learning. For now, VR/AR/AI presents an opportunity to engage students in virtual learn-by-doing activities from the safety of their own homes. While VR/AR/AI headsets, sensors, cameras, and glasses may be costly to purchase for home use, in the not-too-distant future, VR/AR/AI tools may become as ubiquitous and accessible as smartphones, smart devices, and computers. Experts say that in order to transform our learning into an immersive and smart classroom, we need 5G technology to interconnect our smart devices with increased speed and reliability. Sheltering in place due to COVID-19 has allowed us to embrace a new normal with innovative technology, from cost-effective VR/AR/AI immersive classroom tools to personalized learning and collaborations between humans and AI, all powered by 5G. 


Works Cited

“MondlyAR - World's First Augmented Reality Language Learning App.” Learn Languages Online for Free with Mondly, www.mondly.com/ar.

Ayers, Ryan. “Will There Be a Boom In Augmented And VR Use Post-COVID?” eLearning Industry, 2 Aug. 2020, https://elearningindustry.com/vr-ar-technology-use-post-covid.

Baker, Mitzi. “How VR is Revolutionizing the Way Future Doctors are Learning About Our Bodies.” ucsf.edu, 19 Sept. 2017, https://www.ucsf.edu/news/2017/09/408301/how-vr-revolutionizing-way-future-doctors-are-learning-about-our-bodies.

Fourtané, Susan. “Augmented Reality: The Future of Education.” Interesting Engineering, Interesting Engineering, 22 Apr. 2019, interestingengineering.com/augmented-reality-the-future-of-education.

Mathawan, Rohan, et al. “What The Future Of AI and VR Has In Store For The World Of Education.” TechStory, 31 Jan. 2020, techstory.in/what-the-future-of-ai-and-vr-has-in-store-for-the-world-of-education/.

Murphy, Kate. “UNC Students Are Learning in Professor's New Virtual Reality Classroom during Pandemic.” Newsobserver, Raleigh News & Observer, 1 Apr. 2020, www.newsobserver.com/news/local/education/article241677001.html.

Schwartz, Natalie. “How Artificial Intelligence and Virtual Reality Are Changing Higher Ed Instruction.” Education Dive, 2 Nov. 2018, www.educationdive.com/news/how-artificial-intelligence-and-virtual-reality-are-changing-higher-ed-inst/541247/.

Shenoy, Rajiv. “VR, AR and AI will Transform Universities. Here’s How.” Unbound, https://unbound.upcea.edu/online-2/online-education/vr-ar-and-ai-will-transform-universities-heres-how/.

Walker, Sherri. “3 Futuristic Technologies to Support Blended Learning: Artificial Intelligence, Virtual Reality, and Augmented Reality.” Imagine Learning, 9 Oct. 2018, www.imaginelearning.com/blog/2018/10/3-futuristic-technologies-support-blended-learning-artificial-intelligence-virtual.


EqOpTech Inc., located in Los Altos, CA, is a 501(c)(3) IRS-designated tax exempt nonprofit organization that promotes and enables equal opportunity free access to technology for computer learning and STEM education in under-served communities. Visit EqOpTech at www.EqOpTech.org

The Equal Opportunity Technology program is made possible thanks to the Los Altos Community Foundation community grant award. Visit here for more information.

Fighting The COVID Battle With Data

By: Anika Nambisan

Image Credit: Pixabay


As the world battles against COVID-19 and countries reopen their economies in phases, we need to focus on data science to balance public health against economic health. In the U.S., various state governors have laid out different roadmaps to restarting the economy safely. According to medical experts, it will entail at least 20 million COVID-19 tests a day, robust contact tracing, and quarantine measures. Ultimately, the development of a vaccine, antibody testing, and other therapeutics will be the key to flattening the curve and maybe even ending the pandemic. In the meantime, data scientists have built data analytics and prediction models to help forecast new coronavirus hot spots, hospital capacity, ICU beds, ventilators, and PPE (Personal Protective Equipment). With better data, state governments can determine where to send these resources, which counties are safe to start opening, and when to return to restrictions in the event of a surge in virus cases.

Computer Algorithm Sounded The Alarm

60 Minutes reported that on “New Year's Eve, a small company in Canada, BlueDot was among the first to raise the alarm about an infectious disease outbreak” using its computer algorithm. BlueDot’s algorithm, powered by artificial intelligence, was crunching through tons of data including medical and livestock reports, cell phone data, and ticket data from 4,000 airports to predict where the virus would spread next. California Governor Gavin Newsom, in his daily briefing, made no secret that he believes in outbreak science to forecast in real time “on a daily basis, hourly basis, moment-by-moment basis if necessary, whether or not our stay-at-home orders were working. We can truly track now by census tract, not just by county.” “Data became California’s all-seeing crystal ball,” as it leveraged the help of BlueDot, Esri, Facebook, and others, using mapping technologies and cell phone data to predict the next hotspots and develop risk heat maps. California’s early action to mandate shelter-in-place may have saved 1,600 lives in the first month, according to researchers.

Crowdsourced Symptom Data Predicts Next COVID Hotspots

While gathering information from each city in the entire nation may seem onerous, Facebook and Google have partnered with Carnegie Mellon University (CMU) to create a COVID-19 Symptom Map. There are roughly 2 billion Facebook users worldwide. Basic surveys created by researchers at CMU on coronavirus symptoms are being pushed out to Facebook’s users, who participate voluntarily. To protect users’ privacy, Facebook does not share the results; in fact, participants leave Facebook's website to take the survey. To ensure anonymity, a random ID number, along with a statistical weight value to correct for any sample bias, is assigned to measure participation in different communities. The COVID-19 symptom data collected by CMU will be aggregated to help predict potential coronavirus spread by county and hospital region. According to Tibshirani, co-leader of CMU’s Delphi Research Group, “This data has the potential to be extremely valuable for forecasts, because a spike in symptomatic infections might be indicative of a spike in hospitalizations to come." 

Facebook has also partnered with University of Maryland to expand its survey globally and the CMU research team to develop an application programming interface so that any researchers can access the data anytime, anywhere to make informed decisions to combat COVID-19. Google has also joined forces with CMU’s research efforts by partnering with CMU to collect a one-question survey for COVID-19 through its Opinion Rewards and AdMob apps.  

Turning Data into Insights

Early symptom detection can be a sign of whether or not the curve is flattening, where the outbreak may be spreading, and when the next wave of the virus may hit. The results from assays and online surveys can be used to predict where the virus will spread and provide valuable insights on medical resource allocation. 

Due to the limited supply of COVID-19 testing kits, the CDC and tech companies are partnering to launch self-screening tools. Apple has released an app that offers free COVID-19 screening based on symptom analysis, age, travel history, and pre-existing conditions to determine appropriate next steps: self-isolate, seek testing if eligible, or call 911 for emergency medical services. Verily, an Alphabet company, has launched a “Project Baseline” online screener to triage people who may have a high risk of exposure for testing based on public health guidelines. Project Baseline is an initiative that engages people and scientists to uncover more medical insights and develop new health products and services. According to the Project Baseline COVID-19 privacy policy, the personal health data collected will be used for a variety of purposes, including commercial product research and development. This raises privacy concerns: one medical researcher is questioning if the Google COVID-19 site is a “data mining operation”.

Studying the Spread of COVID-19 - Mobility Data

In an effort to fight COVID-19 spread, Facebook and Google track our GPS location data and share it with public health researchers and local governments to help make informed decisions on social distancing measures and travel policy. Scientists believe that social distancing can be effective in mitigating the COVID spread. In response to the COVID-19 pandemic, Facebook’s Data For Good COVID-19 program provides a number of tools such as Disease Prevention Maps, Mobility Datasets and Social Connectedness Index to help health researchers and policymakers address virus outbreak. As the virus spreads mainly from person to person, the public has resorted to physical distancing to slow down the rate of transmission in the absence of a vaccine and effective therapeutics treatment.  

Facebook’s Data For Good mobility tools are helping public health professionals track and monitor whether social distancing is being practiced. For example, the data showed low mobility in the San Francisco Bay Area and high mobility in places like Riverside and San Bernardino. This may be attributed to the Silicon Valley tech industry, which allows tech workers to work remotely from home. In contrast, San Bernardino and Riverside have different socioeconomic demographics, with a predominantly blue-collar workforce for whom remote working and sheltering in place are not possible.

Likewise, Google’s free mobility tracking tool provides insight into how the public’s movement around the community has changed due to COVID-19, such as fewer visits to grocery stores, pharmacies, and retail locations, and more visits to parks. Google believes that the mobility reports could help shape “recommendations on business hours or inform delivery service offerings, or add additional buses or trains in order to allow people who need to travel room to spread out for social distancing.”

Facebook’s Social Connectedness Index “shows friendships across states and countries, which can help epidemiologists forecast the likelihood of disease spread, as well as where areas hardest hit by COVID-19 might seek support.” Similarly, researchers at Facebook utilize colocation maps to “...reveal the probability that people in one area will come in contact with people in another, helping illuminate where COVID-19 cases may appear next.” This helps scientists identify where people from an area with a large outbreak are likely to come into contact with people from areas with fewer cases.

With the recent protests against police brutality, the COVID-19 Mobility Data Network may shed some light on whether such gatherings will cause a spike in COVID-19 cases. However, this remains to be seen: the verdict is still out on whether the protest gatherings were responsible for a spike in coronavirus cases in the wake of the Memorial Day holiday, or whether there is no correlation between the protests and the spread of the virus.

Using Artificial Intelligence To Help Health Experts

Facebook AI has partnered with academic experts to improve COVID-19 forecasting tools for resource planning and allocation for health care providers and emergency responders. For example, using publicly available data and applying Multivariate Hawkes Processes, Facebook AI researchers can create daily COVID-19 forecasting models for the state of New Jersey that will help New York University “...leverage this information in their models to estimate how progression of the disease will affect hospitals, bed and ICU capacity, and local demand for ventilators, masks, and other PPE needs at a hospital and county level.” 
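Facebook AI’s actual forecasting models are multivariate and far more elaborate, but the core idea of a Hawkes process (each new case temporarily raises the rate of future cases) can be sketched in a few lines. The function name and parameter values below are illustrative, not taken from Facebook’s system:

```python
import math

def hawkes_intensity(t, event_times, mu, alpha, beta):
    """Conditional intensity of a univariate Hawkes process at time t.

    mu is the baseline rate; each past event at time t_i contributes an
    exponentially decaying excitation term alpha * exp(-beta * (t - t_i)),
    modeling how one infection raises the short-term rate of new ones.
    """
    return mu + sum(
        alpha * math.exp(-beta * (t - ti)) for ti in event_times if ti < t
    )

# Toy example: baseline rate 0.5 cases/day, past case clusters on days 1 and 2.
rate = hawkes_intensity(t=3.0, event_times=[1.0, 2.0],
                        mu=0.5, alpha=0.8, beta=1.0)
```

Fitting mu, alpha, and beta to observed daily case counts is what turns a process like this into a forecasting model.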

In addition, Facebook AI is “also collaborating with NYU Langone Health’s Predictive Analytics Unit and Department of Radiology to build hospital-specific forecasts for COVID-19, using reinforcement learning, causal modeling, and supervised/self-supervised learning techniques.” Using machine learning to learn from patients’ data such as de-identified X-rays and CT scans will help health experts better “...predict the number of patients whose condition is likely to improve or worsen in a given time period; how many people are likely to be admitted, transferred to ICUs, or discharged; and the number of ventilators, types of tests, and treatments that might be needed.”

COVID-19 and Income Inequality

Using Facebook’s near-real-time mobility tracking data, researchers in Italy are observing the lockdown measures and how they may be correlated to income inequality. The study found that “lockdown measures had the biggest impact on people's mobility in towns with higher financial performance… but also in municipalities with high income inequality and low income per capita, suggesting that the lockdown might exacerbate poverty and income inequality in the absence of targeted fiscal interventions.” 

Real-Time Data Is King, But Not All Data Is Created Equal

With the advent of data analytics comes insight and smart decisions. Real-time big data is the new gold in fighting the COVID battle. However, not all COVID-19 data is created equal; misinformation spreads as fast as the virus itself. According to CMU researchers, “Nearly half of the Twitter accounts spreading messages on the social media platform about the coronavirus pandemic are likely bots.” While researchers are using machine learning and artificial intelligence to do contact tracing or to find a cure, every one of us can do our part in fighting COVID-19. We can help bend the COVID curve by taking socially responsible measures, including physical distancing, mask-wearing, hand washing, contributing to the COVID Symptom Tracker and embracing outbreak science.  But most importantly, we need to stay healthy and keep others healthy.  We are all in this battle together.


Bibliography

Kaplan, Claire. “COVID-19 Lockdowns May Impact Economic Inequalities.” Scimex, 19 June 2020, www.scimex.org/newsfeed/covid-19-lockdowns-and-economic-segregation.

Salzman, Sony. “What Testing Data Reveals about Possible Coronavirus Spike in LA.” ABC News, ABC News Network, 2020, abcnews.go.com/Health/testing-data-reveals-coronavirus-spike-la/story?id=71099034.

Mazziotta, Julie. “Black Lives Matter Protests Do Not Appear to Have Caused a Spike in Coronavirus Cases.” PEOPLE.com, 22 June 2020, people.com/health/black-lives-matter-protests-no-spike-in-coronavirus-cases/.

“COVID-19 Mobility Data Network.” 13 Apr. 2020, www.covid19mobility.org/.

Fitzpatrick, Jen. “Helping Public Health Officials Combat COVID-19.” Google, Google, 3 Apr. 2020, www.blog.google/technology/health/covid-19-community-mobility-reports/.

“COVID-19 Community Mobility Report.” Google, Google, www.google.com/covid19/mobility/.

“Social Connectedness Index.” Facebook Data for Good, 2020, dataforgood.fb.com/tools/social-connectedness-index/.

“Facebook Data for Good Mobility Dashboard: COVID-19 Mobility Data Network.” COVID, 14 Apr. 2020, visualization.covid19mobility.org/?date=2020-07-01.

“Disease Prevention Maps.” Facebook Data for Good, 2020, dataforgood.fb.com/tools/disease-prevention-maps/.

“Our Work on COVID-19.” Facebook Data for Good, 2020, dataforgood.fb.com/docs/covid19/.

O'Flaherty, Kate. “Google's COVID-19 Testing Website: A Danger To Your Privacy?” Forbes, Forbes Magazine, 19 Mar. 2020, www.forbes.com/sites/kateoflahertyuk/2020/03/19/googles-covid-19-testing-website-a-threat-to-your-privacy/.

“Privacy Policy: Project Baseline.” Privacy Policy | Project Baseline, 2020, www.projectbaseline.com/privacy/.

“COVID-19: Project Baseline.” COVID-19 | Project Baseline, 2020, www.projectbaseline.com/study/covid-19/.

“COVID-19.” Apple, 2020, www.apple.com/covid19/.

“Symptoms of Coronavirus.” Centers for Disease Control and Prevention, Centers for Disease Control and Prevention, 13 May 2020, www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/symptoms.html.

“Epi Forecasting.” DELPHI, 2020, delphi.cmu.edu/.

“COVID-19 Interactive Map & Dashboard.” 2020, covid-survey.dataforgood.fb.com/?date=2020-06-20.

Friedson, Andrew, et al. “California's Early Shelter-in-Place Order May Have Saved 1,600 Lives in One Month.” The Conversation, 23 June 2020, theconversation.com/californias-early-shelter-in-place-order-may-have-saved-1-600-lives-in-one-month-137978.

California, State of. California Governor, www.gov.ca.gov/.

“Outbreak Risk Software.” BlueDot, 24 June 2020, bluedot.global/.

“The Computer Algorithm That Was among the First to Detect the Coronavirus Outbreak.” CBS News, CBS Interactive, 2020, www.cbsnews.com/news/coronavirus-outbreak-computer-algorithm-artificial-intelligence/.

Jee, Charlotte. “The US Needs to Do 20 Million Tests a Day to Reopen Safely, According to a New Plan.” MIT Technology Review, MIT Technology Review, 20 Apr. 2020, www.technologyreview.com/2020/04/20/1000228/the-us-will-need-to-do-20-million-tests-a-day-to-reopen-safely/.

“Using AI to Help Health Experts Address the COVID-19 Pandemic.” Facebook AI, 2020, ai.facebook.com/blog/using-ai-to-help-health-experts-address-the-covid-19-pandemic.

Allyn, Bobby. “Researchers: Nearly Half Of Accounts Tweeting About Coronavirus Are Likely Bots.” NPR, NPR, 21 May 2020, www.npr.org/sections/coronavirus-live-updates/2020/05/20/859814085/researchers-nearly-half-of-accounts-tweeting-about-coronavirus-are-likely-bots.


EqOpTech Inc., located in Los Altos, CA, is a 501(c)(3) IRS-designated tax exempt nonprofit organization that promotes and enables equal opportunity free access to technology for computer learning and STEM education in under-served communities. Visit EqOpTech at www.EqOpTech.org

The Equal Opportunity Technology program is made possible thanks to the Los Altos Community Foundation community grant award. Visit here for more information.

Full Circle: Turning E-waste to E-resources

By Sarah Yung

Nature sustains a delicate balance between all living things in a cycle of creation and destruction known as the circle of life.   As Mufasa from The Lion King says, “When we die, our bodies become the grass, and the antelope eat the grass. And so we are all connected in the great Circle of Life."  Similarly, if we can extract, recycle, and reuse raw materials to add value to new or existing products, then we can transform our manufacturing process into a sustainable and renewable life cycle.

To sustain a circle of life, we need to create a circular economy. The circular economy rests on the idea that waste can be transformed into valuable resources for another purpose.  Everything in a circular economy is produced from resources that are repurposed or regenerated from existing materials or component parts when products approach the end of life.

Just like the Circle of Life, a circular economy connects supply chains, markets, vendors, and consumers.  Under this business model, all stakeholders contribute to the circular economy by exerting influence to reduce, recycle and reuse natural resources.

From E-Trash to E-Treasure

Electronic waste, also known as e-waste, is a significant issue in today’s world.  We generate 50 million tonnes of e-waste annually, a figure expected to double by 2050.  However, only 15-20% of this waste is collected and properly recycled.  Undocumented waste is incinerated, traded illegally, or processed with substandard methods.  Most e-waste accumulates in landfills, where it not only takes up space but also poses a hazard to others.  


Although e-waste accounts for only 2% of landfill trash, it is 70% of all hazardous waste.  It’s commonly known that plastic - which accounts for about 20% of e-waste - takes a long time to naturally decompose.  But there are also a number of dangers associated with the toxic chemicals that can be found in e-waste.  Those who process electronic waste are exposed to noxious fumes which are hazardous to one’s health.  Heavy metals like lead, mercury, and cadmium can contaminate land, water, and air.  Once chemicals filter into our water supply and the food chain, humans are at risk of consuming dangerous chemicals.

Refurbishing is a promising method to solve the pressing e-waste issue by transforming e-waste into “e-treasure.”  In refurbishing, manufacturers strip old products for components still in working condition and reuse these components in new products.  Unlike in recycling, working components can be reused almost indefinitely.  Refurbishing will significantly lower our carbon footprint by obviating the need to remanufacture components in new tech.  Refurbishing is one of the building blocks for a circular economy, and it has both environmental and economic benefits when it comes to manufacturing electronics.

Manufacturers

Transforming from a linear “make, buy, dispose” production model to a circular “reduce, reuse, recycle” economy can create promising future opportunities.  Increasingly, investors value companies beyond their short-term profits, also considering their sustainability performance.  Manufacturers that are socially responsible will align their social, environmental, and governance strategies with their profitability goals.  The bottom line: companies will be assessed and rewarded by investors based on how well they manage their profit, planet and people programs to create long-term value for society as a whole.

Since 2014, Dell has used over 21 million pounds of closed-loop plastics in its products, making over 125 different products with the reused components.  Dell uses 30% of the plastic it collects for its own devices, then sends the rest to downstream recyclers to be used in other appliances.  In 2018, they started using recycled gold, even starting their own jewelry line made of gold from refurbished electronics.

A decade ago, Best Buy launched a parking lot recycle/renew program that has taken back and recycled 2 billion pounds of used electronics to date, including old PCs, cables, TVs and other electronics.  A Best Buy/Apple partnership opened up Best Buy stores to offer Apple-certified product repair services, with the idea of fixing, refurbishing, and reusing devices instead of throwing them away.

One obstacle to refurbishing is that modern electronics are not easy to recycle or break down.  Because tech companies closely guard their designs, it’s difficult for recycling facilities to know how to best process the devices.  There is a push for manufacturers to make their devices more easily recyclable and reparable through a number of new regulations.  Regulations include ensuring spare parts are available for a certain number of years, making appliances easy to disassemble, and allowing repair professionals to access technical information.

Manufacturers can also eliminate planned obsolescence.  Planned obsolescence is a policy of creating products that rapidly become obsolete, forcing the consumer to continuously purchase newer, upgraded products.   In the long run, this practice may lead to increases in e-waste that even refurbishing can’t keep up with.  By minimizing the generation of electronic waste, we can minimize the energy needed to maintain a circular economy.

Consumers

As consumers, we must be responsible for our own consumption, limiting our purchases to what is functional.  Just as manufacturers can eliminate planned obsolescence, consumers can continue to use their devices for longer, instead of buying new devices as soon as they are released.  Holding onto usable products reduces consumption, making refurbishing a feasible process in the future.

Likewise, consumers can also make a big difference by recycling unusable electronics, instead of holding onto them or trashing them.  Currently, while big appliances have a nearly 80% recycle rate, barely 20% of small appliances make it to the recycling center.  Big appliances can be picked up by city services, while small appliances must be dropped off at a city-designated site, which is far more inconvenient for the average person.

In Silicon Valley, environmentally conscious students are making a difference in our treatment of electronics.  For example, the Los Altos High Green Team hosts an electronic waste drive every year to prevent consumers from improperly disposing of their e-waste.  Our organization, EqOpTech, engages student volunteers to refurbish old computers so they can be used again, instead of being dumped in a landfill.

Others are also making improvements in how they deal with electronic waste: directing e-waste to recycling centers instead of the landfill and developing processes to handle different types of e-waste.  Around the world, the London-based social enterprise the Restart Project empowers people to “fix our relationship with electronics.”  People with broken consumer electronic equipment can come to events organized by the Restart Project, and volunteers will mend their electronics for free.

To live a sustainable life, we all need to change our mindset to recycle and reuse, instead of just throw away.

Internet of Things & Cloud Computing

Harnessing the potential of technology innovation not only helps us live a sustainable life but also benefits the planet.  The era of cloud computing and the Internet of Things (IoT) gives us more power than ever to optimize and extend the life of resources.  Cloud computing enables companies to share software and network resources via the cloud.  Pay-as-you-go subscriptions allow companies to scale based on their infrastructure needs, minimizing initial investment, operating, and maintenance costs. Cloud computing focuses on pooling resources, to “share and rent” rather than to “own” - maintaining our resources by scaling up and down, instead of building and discarding resources.

The sustainable and flexible nature of cloud computing is complementary to an IoT circular model where smart, wireless sensors can collect data on equipment and report when it needs to be serviced - a process known as predictive maintenance.  Rather than replacing specific parts at scheduled intervals (preventive maintenance), predictive maintenance allows us to use functional parts for longer, and saves us the hassle of waiting for repairs when a part breaks earlier than expected.  When a part is replaced, it is feasible to salvage useful materials that can be reused in the regular supply chain.  From manufacturing processes to energy and city infrastructure, billions of global smart IoT devices continue to extend product lifespan and optimize profitability.  For example, Barcelona uses smart meters to monitor and optimize the use of electricity.  Their system of rain and humidity sensors regulates irrigation of public parks, yielding cost savings of 25%.
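Real predictive-maintenance systems use far richer models, but the basic idea, extrapolating a sensor’s wear trend to estimate when it will cross a failure threshold, can be sketched with a simple least-squares fit. The function and the numbers below are hypothetical, for illustration only:

```python
def predict_failure_time(times, readings, threshold):
    """Fit a least-squares line to sensor readings and return the time
    at which the trend crosses the failure threshold, or None if no
    upward wear trend is detected."""
    n = len(times)
    mean_t = sum(times) / n
    mean_r = sum(readings) / n
    slope = sum((t - mean_t) * (r - mean_r) for t, r in zip(times, readings))
    slope /= sum((t - mean_t) ** 2 for t in times)
    intercept = mean_r - slope * mean_t
    if slope <= 0:
        return None
    return (threshold - intercept) / slope

# Vibration readings drifting upward: service the part shortly before
# the predicted crossing time instead of on a fixed schedule.
eta = predict_failure_time([0, 1, 2, 3], [1.0, 1.2, 1.4, 1.6], threshold=2.0)
```

This is the contrast with preventive maintenance: instead of replacing the part at a scheduled interval, the replacement is timed to the part’s actual measured condition.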

Another promising technological development in our transition to a circular economy is image recognition via artificial intelligence.  In order to properly reuse and recycle different types of materials, they must be properly sorted.  For example, biowaste can be used to produce energy or fertilizers, plastics can be repurposed for the construction of roads, and waste can be burned (in proper conditions) to generate energy.  Electronic components, as we’ve been discussing in this article, can also be reused in new devices.  Using a combination of image recognition and wireless sensors, we can improve upon smart recycling and waste management.  Printing RFID tags on common waste allows sensors to sort and send waste to appropriate methods of disposal.  Image recognition can supplement this process through a quick visual assessment of incoming waste.
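A sorting line that combines printed RFID tags with an image-recognition fallback might route items with logic like the following sketch. The categories and destinations are illustrative, echoing the examples above, and are not a real facility’s configuration:

```python
# Destination for each material category (illustrative values only).
ROUTES = {
    "biowaste": "energy / fertilizer production",
    "plastic": "road construction",
    "electronics": "component recovery",
}

def route_item(rfid_tag=None, image_label=None):
    """Prefer the printed RFID tag; fall back to the image classifier's
    label; send unrecognized items to a default disposal stream."""
    label = rfid_tag or image_label
    return ROUTES.get(label, "controlled incineration")

# An untagged item identified visually as plastic:
stream = route_item(rfid_tag=None, image_label="plastic")
```

The RFID tag is treated as authoritative because it is printed at manufacture time; the visual assessment only fills in when no tag is readable.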

Government

The government has much incentive to enact legislation to forward a circular economy.  A circular economy will create more jobs and retain value in the electronics industry.  Refurbishing will also be necessary to meet the rising consumer demand for affordable electronic devices - it reduces the cost of materials and could help protect companies against the volatility of global markets. 

Our government must take responsibility for electronic waste.  The U.S., one of the world’s biggest producers of e-waste, still hasn’t ratified the Basel Convention - a multilateral environmental agreement that would restrict exports of e-waste, which are particularly harmful to developing countries.  Aside from the Basel Convention, the U.S. also has no national law for managing e-waste, leaving it up to the states.  On the other hand, the EU has some of the toughest enforcement in the world, and a higher recycling rate for electronic waste, showing how governments can impact our attitude towards e-waste.

The government plays a crucial role in switching from a waste management hierarchy to a circular economy hierarchy.  Rather than managing waste (by containing it in a landfill), the government should maximize the utilization of our resources (by extending product lifespan and recovering valuable materials).  Our government can promote a circular economy via updating climate plans and reforming recycling regulations.  They can also provide incentives to companies that extend their products’ lives or use recovered materials by reducing taxes or subsidizing production. 

Life Cycle Sustainability = Healthy Profit, People & Planet 

A circular economy has great economic and environmental benefits, but reshaping our economy will require a massive paradigm shift, tearing down existing infrastructure and adding a new dimension to production.  

It is every stakeholder’s responsibility to create and live a circular economy for a self-sustained life.  The consumer’s mindset and behavior drives demand, which prompts producers to change their manufacturing processes and products.  Innovation, government regulations, and recycling pathways will all help drive initiatives towards circularity.  The road to a circular economy is not easy, but has the potential to foster a lifetime of opportunity and prosperity.  It is up to the circle of stakeholders to remake our economy into that interconnecting and sustainable world.

We need everyone to join the circular economy to enable the circle of life.  EqOpTech is one of many organizations working towards this great cause - helping with the notion of “reduce, reuse, and recycle” by refurbishing old tech.  You can make a difference!  The computers we refurbish go to underprivileged students in our community, so they can access valuable online educational enrichment resources.  Check us out at www.eqoptech.org and join us in this effort!


Works Cited

“AI for Waste Management.” Microsoft - Make Your Wish, https://www.microsoft.com/mea/make-your-wish/wish-details?id=322. Accessed 14 Jan. 2020.

“The Circle of Life: A Look at the Circular Economy.” Enablon, 9 July 2019, https://enablon.com/blog/the-circle-of-life-a-look-at-the-circular-economy/.

Lemke, Eveline. “Circle of Life Needs Circular Economy. A Definition.” 21 June 2019, https://www.eveline-lemke.com/2019/06/circle-of-life-needs-circular-economy-a-definition/.

“E-Waste and Emergence of Recommence Platforms to Drive Market : Refurbished Computers and Laptops.” TimesTech, 3 Jan. 2020, https://timestech.in/e-waste-and-emergence-of-recommence-platforms-to-drive-market-refurbished-computers-and-laptops/.

Kumar, Raj. “Using Refurbished Technology Is the New Solution to E-Waste Management.” India Today, India Today, 6 July 2019, https://www.indiatoday.in/education-today/featurephilia/story/using-refurbished-technology-is-the-new-solution-to-e-waste-management-1563329-2019-07-06.

Larmer, Brook. “E-Waste Offers an Economic Opportunity as Well as Toxicity.” The New York Times, 5 July 2018. NYTimes.com, https://www.nytimes.com/2018/07/05/magazine/e-waste-offers-an-economic-opportunity-as-well-as-toxicity.html.

Lee, Lucy. “CFOs Take Note: The Triple Bottom Line Is the Future.” RoseRyan, 9 Oct. 2012, https://roseryan.com/2012/10/sustainability-at-the-office-of-finance/.

Mitchell, Leila. “IoT and Sustainability: How Sensors Support the Circular Economy.” Auroras s.r.l., 8 Jan. 2018, https://www.auroras.eu/iot-and-sustainability-how-sensors-support-the-circular-economy/.

Mosbergen, Dominique. “So You Recycled Your Old Laptop. Here’s Where It Might’ve Gone.” HuffPost, 20 July 2018, https://www.huffpost.com/entry/old-laptop-recycling_n_5b30d0e2e4b0040e274534a2.

Ryder, Guy, and Houlin Zhao Houlin. “The World’s e-Waste Is a Huge Problem. It’s Also a Golden Opportunity.” World Economic Forum, World Economic Forum, 24 Jan. 2019, https://www.weforum.org/agenda/2019/01/how-a-circular-approach-can-turn-e-waste-into-a-golden-opportunity/.

Stanislaus, Mathy. “5 Ways to Unlock the Value of the Circular Economy.” World Resources Institute, 15 Apr. 2019, https://www.wri.org/blog/2019/04/5-ways-unlock-value-circular-economy.

Theo, Sani. “AI In E-Waste Management: Dig Gold & Burn The Shit! | Smart Recycling.” Electronics For You, 6 Aug. 2019, https://electronicsforu.com/technology-trends/tech-focus/smart-recycling-ai-ewaste-management.

Valerio, Pablo. “Intelligent Assets, a Key Building Block for Circular Economy.” IoT Times, May 2018, https://iot.eetimes.com/intelligent-assets-a-key-building-block-for-circular-economy/.

Wisniewska, Aleksandra. “What Happens to Your Old Laptop? The Growing Problem of e-Waste.” Financial Times, Financial Times, 9 Jan. 2020, https://www.ft.com/content/26e1aa74-2261-11ea-92da-f0c92e957a96.



A Virtually New World: VR, AR, AI joining forces

By Alice Liu

Have you ever wanted to go to Mars, explore the ocean, or learn how to fly a plane without actually doing it? Well, with virtual reality (VR), you can, from the comfort of your own home! From gaming to exposure therapy, VR is breaking all kinds of borders in the world of technology. Its unique 3D immersion in a simulated virtual world lets users interact with their surroundings through sight and sound, and soon touch and smell. This innovative technological environment allows users to experience and interact with a visual simulation through multiple devices.

VR, AR: A Total Sensory Experience 

Image credit: Pixabay


VR products totally immerse the user in a virtual digital environment by replacing the physical world with a simulation -- which is what users see and feel in VR. Similarly, another breakthrough technology on the rise is augmented reality (AR), which adds digital content to the real-world environment in order to engage and interact with users. AR can be seen everywhere in technology, with some of its most common applications being Snapchat, Pokemon Go, and many other popular apps. Given AR’s accessible requirements (usually only a camera and a smart device), users can access AR apps and tools from a variety of devices, including the typical smartphone.

VR, on the other hand, seeks to embody a realistic, computer-generated 3D environment -- one that someone can interact with and perceive as real. For a user to feel truly immersed in the digital environment, there needs to be an element of interaction and engagement via vision, sound, touch or other senses. VR can utilize a person’s senses and essentially “manipulate” them into getting comfortable with a totally digital world. To experience this 3D virtual world, users immerse themselves through electronic tools such as a headset, a device, and sometimes a motion-tracking gadget or handheld wearables. Head tracking, eye tracking and motion tracking are important for letting a user see, feel, interact, move around and explore in a virtual world.

“AR inside VR”: Replacing the Controller with your Hands

Mark Zuckerberg talks about the future of VR for Facebook and sees that VR and AR are both interdependent and omnipresent. "If you think about how we use screens, phones are the ones we bring with us, but half of our time with screens is TVs. I think VR is TV and AR is phones," said Zuckerberg. 

As we advance into the future, more and more technological innovations and discoveries come our way. An important development enhancing the VR/AR world is hand-tracking technology. According to the Oculus Quest (acquired by Facebook for $2 billion) release, “From gesturing and communicating with others to picking up and manipulating objects, our hands play an important role in how we interact with the world—and they’re key to unlocking the feeling of true presence in a virtual space. We first brought your hands into VR with Oculus Touch controllers, so you could engage in VR in a more natural way. Now, we’re taking the next step with hand tracking on Oculus Quest—letting you interact in VR without controllers, using your real hands.”  

Thanks to deep neural networks, Oculus’s hand-tracking technology can accurately predict hand position and pose, eliminating the need for controllers or gloves while letting users interact in an intuitive, natural way in the virtual world. For example, in a VR training environment, students are able to interact and communicate with hand gestures without controllers or gloves. Zuckerberg sees Facebook’s social VR platform as a way to expand into augmented worlds to “help people come together.” 

Today’s VR Applications

We all know VR as this immersive, intriguing, and relatively new technology commonly used in gaming, but what more potential does VR have? From envisioning architecture to helping with dementia, VR can be applied to a plethora of areas.

Architects and industrial designers commonly use VR to help visualize their designs and builds, a big step forward from building scaled models out of paper. VR is much more efficient for designers, and the digital platform makes it cheaper as well. Instead of buying materials to model their builds, they can design them on a computer screen while testing the build for flaws or safety issues in VR. On top of that, clients and architects can walk through the structure of a design in VR. Virtual reality can provide a much more accurate and complex model than a scaled-down physical build.

Education and training is also a big area where VR is used today. Training and teaching people through virtual reality can be very effective in making sure they understand what they’re doing while experiencing it as if it were real life. In her NPR article “Virtual Reality Goes To Work, Helping Train Employees”, Yuki Noguchi introduces VR in the workplace with an example of how Walmart is training over 1 million employees through VR. “The sensory immersion is key to its effectiveness,” she writes, noting that the brain can process virtual sensory immersion as a real-life activity. This lets someone practice certain tasks in VR as if they were a real experience, thanks to the realistic sensory details. Employers can also use the data gained from a VR experience to see how a person handles a given situation in order to select the best candidate. 

Over the past few years, VR has found a new purpose among all its others: improving the lives of those with dementia. A BBC video by Dougal Shaw titled “How virtual reality is helping people with dementia” opens at a residential care center in Oxford with a few elders suffering from dementia. They are using VR technology that shows specific, realistic scenes from their past to help them re-experience what was once theirs. The screens display a vivid scene from the past, filmed specifically for people who remember that event, in what is called a “reminiscent experience”. The goal of this virtual reminiscence therapy is to provide stimulation, which can trigger memories and help with the person’s cognitive ability. 

Joint Power of AI + VR + AR 

The confluence of AR and VR solutions can be further enhanced by Artificial Intelligence (AI) and machine learning (ML), particularly computer vision and natural language processing.  Peter Diamandis, a Silicon Valley entrepreneur, futurist and investor, predicts that the future of fashion and shopping will be heavily influenced and shaped by these three technologies. He envisions a virtually connected world where shopping can be done anytime, anywhere, with AR glasses in an “always-on” shopping mode, powered by AI digital assistants loaded with personalized data that know users’ tastes, measurements and accessories better than the users themselves. AR and VR will allow shoppers to try on clothes virtually, with 3D models wearing the clothes customized to their liking, while machine learning algorithms train on and personalize the data, transforming insights into decisions. 

The advent of 5G network connectivity is expected to be a complete game-changer, promising to empower AR/VR/AI applications and converge the physical and virtual worlds in education, work, gaming, vehicles, healthcare and beyond.


Works Cited:

“AR vs VR: What's the Difference?” Guru99, www.guru99.com/difference-between-ar-vr.html

Bardi, Joe. “What Is Virtual Reality? VR Definition and Examples.” Marxent, 26 Mar. 2019, www.marxentlabs.com/what-is-virtual-reality/

Lacoma, Tyler. “Learn the Basics of VR: Here's Everything You Need to Know about Virtual Reality.” Digital Trends, Digital Trends, 25 Mar. 2018, www.digitaltrends.com/computing/what-is-vr-all-the-basics-of-virtual-reality/

“What Is VR and How Does It Work?” Thinkmobiles, thinkmobiles.com/blog/what-is-vr/

McDowell, Maghan. “A Top Silicon Valley Futurist on How AI, AR and VR Will Shape Fashion's Future.” Vogue Business, 28 Jan. 2020, www.voguebusiness.com/technology/ai-ar-and-vr-shaping-fashions-future-peter-diamandis

Noguchi, Yuki. “Virtual Reality Goes To Work, Helping Train Employees.” NPR, NPR, 8 Oct. 2019, www.npr.org/2019/10/08/767116408/virtual-reality-goes-to-work-helping-train-employees

Shaw, Dougal, director. How Virtual Reality Is Helping People with Dementia. BBC News, BBC, 12 Sept. 2019, www.bbc.com/news/av/business-49654052/how-virtual-reality-is-helping-people-with-dementia

Stein, Scott. “Mark Zuckerberg Sees the Future of AR inside VR like Oculus Quest.” CNET, 25 Sept. 2019, www.cnet.com/features/mark-zuckerberg-sees-the-future-of-ar-inside-vr-like-oculus-quest/

Han, Shangchen, et al. "Using Deep Neural Networks for Accurate Hand-Tracking on Oculus Quest." Facebook AI, Sept. 2019, https://ai.facebook.com/blog/hand-tracking-deep-neural-networks/


EqOpTech Inc., located in Los Altos, CA, is a 501(c)(3) IRS-designated tax exempt nonprofit organization that promotes and enables equal opportunity free access to technology for computer learning and STEM education in under-served communities. Visit EqOpTech at www.EqOpTech.org

The Equal Opportunity Technology program is made possible thanks to the Los Altos Community Foundation community grant award. Visit here for more information.

Machine Doctors and Disease Detectives

By Sarah Yung

Artificial intelligence (AI) in medicine goes all the way back to the 1970s, when Stanford's prototype expert system MYCIN was used to help treat blood infections. Back then, patient records were packed in file boxes, stowed away in a musty closet in the corner of a hospital. The coming years may dramatically change how we handle patient records and data. Advances in computational power and the accumulation of massive amounts of data make many clinical problems ripe for AI applications. Machines have the potential to vastly improve medical care, primarily by augmenting the skills of today's human physicians.

AI in healthcare spans many of the core fields of medicine. From diagnostics to health and wellness to smart devices, technology is making doctors better and more efficient at what they do. Software’s ability to adapt without human intervention will soon make it indispensable in the field of medicine.

Harnessing the Power of Computational Algorithms


Machine learning is improving disease and symptom detection, enabling doctors to give patients the treatment they need. Algorithms are well suited to finding patterns and making connections, and medical care is all about finding and treating diseases, which present with distinct symptoms. Machines can have a keener eye for these symptoms than humans do - it's simply what they are designed to do. A machine's adeptness at pattern detection may help us leverage far more patient data.

Only about 3% of cancer patients are enrolled in clinical trials. AI could leverage data from the other 97%, drawing conclusions and devising potential treatments from an untapped source of information. Pharmaceutical companies can use bioinformatics to discover and develop new treatments and cures, and researchers may be able to determine regions where certain drugs are or are not likely to be effective.

AI could also enable more consistent interpretation of data. Take radiology. CT scans, MRIs, and X-rays all provide an internal view of a patient’s body. However, different experts will interpret these images differently, which could lead to very different plans of treatment. Computers, uninfluenced by emotions or fatigue, could make identifying symptoms and classifying diseases more uniform. Many companies are employing machine learning software to make minor diagnoses from within smartphone apps. More significant diagnoses are made by machine and human working in tandem.

Most algorithms concerned with disease and symptom detection target abnormal cell growth or development of cancer. The Lymph Node Assistant, or LYNA, was created by Google to identify metastatic breast cancer tumors. Compared to human reviewers, LYNA managed to halve the average slide review time. Algorithms can also be used to determine treatment plans. A new computer program developed at the University of Arizona can personalize drug treatments using a patient’s genetic information. The incredible accuracy and efficiency of machines in detecting potential diseases allows doctors to focus on their patients’ treatments.

Medical Solutions in Developing Nations

But beyond disease detection, machine learning can make medical care accessible everywhere, bringing healthcare to developing countries while transforming it in wealthier nations. Faced with a different set of problems, developing countries focus on providing basic services to people in poverty and in remote locations. Although advanced AI solutions may not scale down easily for smaller regional healthcare providers, AI can still assist with administrative tasks, allowing physicians to focus wholly on their patients. With 24-hour availability, AI could even reduce the number of appointments a patient has to make with their doctor.

Software is also making the larger medical community an available resource for doctors everywhere. Thousands of clinicians from all over the world are collaborating to build and use a diagnostic and management tool known as Human Dx. Doctors can ask a question and upload relevant information, and the software returns a report of aggregated, prioritized responses. This work provides professional consultations to support high-value care, even in areas where specialists are scarce.

Man and Machine

A robot's precision makes it a promising candidate for assisting in surgery; Accenture projects that robot-assisted surgery could yield annual benefits of around $40 billion by 2026. Still, we probably won't see robot brain surgeons for a long while. Surgery is a delicate field, requiring fine precision and the ability to make decisions on the fly, and robots are not yet as adept as human surgeons at handling such complex tasks. Plus, many people are leery of a robot performing their surgery.

Many consumers wear fitness bands or smartwatches to track their exercise and sleep. Similar medical-grade devices have even greater capabilities: depending on their design and sophistication, they can track a person's heart rate, oxygen level, breathing, and other data. This gives healthcare providers a wealth of data between patients' appointments that would not otherwise be available. Wearable technology can also alert users immediately if there is a potential problem, increasing the chance that they will get the care they need.

There is a lot of budding interest in AI-enabled systems that can be integrated with the human body. Cardiology is a difficult field to make advances in, given the life and death stakes inherent in heart conditions. However, scientists are developing an implantable defibrillator that monitors heart rhythms of at-risk patients, and can administer a shock if necessary. There are also potential applications for artificial intelligence in brain-computer interfaces. AI networks are modeled after the human brain’s function. Researchers hope that brain-computer interfaces can replace other types of computer interfaces. This could be particularly helpful for people with permanent or temporary disabilities.

Records and Research

Although doctors have mostly moved out of the paper world, keeping health records is still a tedious task. The use of electronic health records (EHRs) is pervasive in medicine, but it requires a lot of extra work from doctors and medical assistants. Video-based image recognition may be able to handle the bulk of this task and add its own insight to EHRs, filling in the blanks that humans may miss.

Natural language processing will allow voice recognition capabilities to replace keyboards, removing the need for manual entry. Reworking the record-keeping system would transfer time-consuming tasks to software, reducing the human labor and the costs associated with modern healthcare.

Artificial intelligence also has the capability to transform clinical trials. Traditional research and development is a lengthy and expensive process. Machine learning can analyze and process information about relevant compounds far faster than conventional methods, saving the company time and the cost of manual labor. Dozens of health and pharmaceutical companies are leveraging new technology to help drug discovery and reduce the time it takes to bring drugs to market. With Johnson & Johnson, IBM Watson is taking natural language processing into more pioneering fields. Their collaboration focuses on utilizing natural language processing to analyze medical papers to aid in drug development.

The Black Box Algorithm

Our new technology is not perfect, however. We cannot rely on software and systems in their current state to manage patient care. One significant risk associated with AI systems is bias. Biased results stem from the biases, intentional or unintentional, of the humans who create and train the algorithm. If a self-learning system is trained on biased or flawed data, it can make erroneous recommendations or decisions. And in medicine, unlike in finance or retail, one decision can be the difference between life and death. Implementing AI will certainly require new ethics rules to address and prevent bias.

However, there are many barriers to overcome before medical professionals would consider fully relying on software. The regulations imposed on both algorithms and clinical trials pose an obstacle for artificial intelligence to gain a foothold in medicine. Many algorithms rely on intricate, difficult-to-interpret mathematics, often referred to as a "black box," which makes it hard to maintain transparency around researchers' methods. It is hard for doctors and patients alike to trust a mysterious program without knowing how it works. Many patients also harbor privacy concerns about their data being used and analyzed by machines. Although security is constantly improving, data breaches sadly remain a common occurrence. For now, algorithms cannot operate independently in clinics.

The Art of Medicine

A growing machine presence in medicine may pose a risk to doctors and patients alike. As profit-driven companies develop the technology behind advanced medical care, some fear how it will affect the human touch in medicine. As clinical as the field may seem, patients benefit greatly from the reassurance of a medical professional, a reminder that somebody besides their family cares about their wellbeing. The biggest contribution of AI may be freeing up doctors to truly connect with patients and do the things that matter.

Optimism must be tempered with a healthy dose of caution.  Technology has equal potential to close and widen disparities.  In the fever surrounding medical applications of AI, we must take care to protect patient rights.  Patients need to be aware of what their data is being used for and be informed of the algorithm being used on them.  It also isn’t clear yet who will benefit from AI health care. Patients and healthcare systems could benefit, or money could simply flow to tech companies and health care providers.

We must be mindful to avoid a so-called "health care apartheid," in which those of more modest means are left to rely solely on robot doctors. It would be unsafe to blindly trust the predictions made by deep-learning software; even large datasets cannot shield us from errors that arise when researchers apply their algorithms to a new population. While these systems can apply rigid algorithms to improve decision-making, they need the general intelligence of humans to correct potentially harmful predictions with major health and financial consequences.

Luckily, technology will be working with us, not against us. New technology usually doesn't solve problems on its own - it makes us better at what we do, and it is still up to humans to put their newfound abilities to the task. We need both machine and human intelligence to truly make an impact. If implemented properly, AI can improve clinical decision support for physicians and empower patients in preventive medicine. Most powerfully, AI may have a truly life-changing effect by restoring the care in health care.

Sources:

“AI And Healthcare: A Giant Opportunity.” Forbes, https://www.forbes.com/sites/insights-intelai/2019/02/11/ai-and-healthcare-a-giant-opportunity/

Arsene, Codrin. “Artificial Intelligence in Healthcare: The Future Is Amazing.” Healthcare Weekly, 18 Mar. 2019, https://healthcareweekly.com/artificial-intelligence-in-healthcare/.

Arsene, Codrin. “How Artificial Intelligence Can Improve Clinical Trials.” Healthcare Weekly, 3 Mar. 2019, https://healthcareweekly.com/artificial-intelligence-clinical-trials/.

Artificial Intelligence in Medicine | Machine Learning. https://www.ibm.com/watson-health/learn/artificial-intelligence-medicine

Brinker, Titus J., et al. “Deep Learning Outperformed 136 of 157 Dermatologists in a Head-to-Head Dermoscopic Melanoma Image Classification Task.” European Journal of Cancer, vol. 113, May 2019, pp. 47–54. DOI.org (Crossref), doi:10.1016/j.ejca.2019.04.001.

Greenfield, Daniel. “Artificial Intelligence in Medicine: Applications, Implications, and Limitations.” Science in the News, 19 June 2019, http://sitn.hms.harvard.edu/flash/2019/artificial-intelligence-in-medicine-applications-implications-and-limitations/.

Harris, Richard. “As Artificial Intelligence Moves Into Medicine, The Human Touch Could Be A Casualty.” NPR.Org, https://www.npr.org/sections/health-shots/2019/04/30/718413798/as-artificial-intelligence-moves-into-medicine-the-human-touch-could-be-a-casual

Hsu, Jeremy. “Will Artificial Intelligence Improve Health Care for Everyone?” Smithsonian, https://www.smithsonianmag.com/innovation/will-artificial-intelligence-improve-health-care-for-everyone-180972758/

Krisberg, Kim. Artificial Intelligence Transforms the Future of Medicine. https://news.aamc.org/research/article/artificial-intelligence-transforms-future-medicine/

Martin, Nicole. “How Healthcare Is Using Big Data And AI To Cure Disease.” Forbes, https://www.forbes.com/sites/nicolemartin1/2019/08/30/how-healthcare-is-using-big-data-and-ai-to-cure-disease/.

Morgan, Lisa. “Artificial Intelligence in Healthcare: How AI Shapes Medicine.” Datamation, https://www.datamation.com/artificial-intelligence/artificial-intelligence-in-healthcare.html. 

O’Connor, Anahad. “How Artificial Intelligence Could Transform Medicine.” The New York Times, 11 Mar. 2019. NYTimes.com, https://www.nytimes.com/2019/03/11/well/live/how-artificial-intelligence-could-transform-medicine.html.

Phaneuf, Alicia. “AI and Machine Learning Are Changing Our Approach to Medicine and the Future of Healthcare.” Business Insider, https://www.businessinsider.com/artificial-intelligence-healthcare




Intelligent Money

By Sarah Yung

Blue-jacketed market makers bustle across the New York Stock Exchange floor from opening to closing, standing ready to buy and sell stocks listed on the exchange. NYSE’s human traders are the face of Wall Street, but they may soon become obsolete. Fintech - the integration of technology into financial services - is a quickly growing field today that threatens to flood the financial industry.

A Machine World - Convergence of AI and Fintech


Analysts predict a torrent of artificial intelligence ("AI") will soon sweep through the industry, driving companies to drop their high-earning traders in favor of machines. Financial giants have slowly been integrating AI-driven systems, which can foresee market trends and make trades better than humans. Machine learning algorithms excel at analyzing data, regardless of its size and density. They can detect patterns that are difficult for humans to spot and process information fast enough to make short-term trades. For example, an algorithm can use price movements in the S&P 500 index to predict moves in individual stocks and then trade accordingly. A flood of AI-based technology will displace many traders earning up to millions of dollars.
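The index-to-stock prediction described above can be sketched in a few lines. This is a toy "beta" model, not any firm's actual trading system: it estimates how strongly a stock has historically moved with the index, then projects the stock's return from an index move. The data is invented for illustration.

```python
# Illustrative only: estimate a stock's sensitivity ("beta") to an index
# from past returns, then use an index move to project the stock's move.

def estimate_beta(index_returns, stock_returns):
    """Ordinary least-squares slope of stock returns on index returns."""
    n = len(index_returns)
    mean_x = sum(index_returns) / n
    mean_y = sum(stock_returns) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(index_returns, stock_returns))
    var = sum((x - mean_x) ** 2 for x in index_returns)
    return cov / var

def predict_move(beta, index_move):
    """Project the stock's return for a given index move."""
    return beta * index_move

# Toy history: this stock has tended to move about twice as much as the index.
index_hist = [0.010, -0.020, 0.005, 0.015, -0.010]
stock_hist = [0.021, -0.039, 0.011, 0.029, -0.022]
b = estimate_beta(index_hist, stock_hist)
```

A real system would use far richer features and much more data, but the core idea, fitting a statistical relationship and projecting it forward faster than a human could, is the same.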

Analysts predict that in a few decades, 90,000 of the 300,000 jobs in asset management will be lost to AI. However, society as a whole may benefit from this change. Wall Street attracts some of the most brilliant minds in society: about ⅓ of graduates from the top 10 business schools go into finance. As active managers shift money from human equity analysts to engineers, displaced analysts will be incentivized to seek work in other fields. Bright graduates who would have gone to Wall Street can fill openings in fields like healthcare and energy, as well as at nonprofits. This could lead to advances in those fields that tangibly benefit many people.

Digital Wealth Management

Although we haven't yet reached that point, innovative new companies bring us closer and closer. A number of completely AI-based hedge funds have emerged in the last few years, among them Sentient and Numerai. While many companies are integrating artificial intelligence into their operations, most are reluctant to hand full control to machines; only a few pioneering funds like these two have fully automated their processes.

Machine learning models open up new ways to make predictions and draw conclusions. Satellite image recognition can yield real-time data points such as parking lot traffic, from which business insights like the frequency of shopping at specific stores can be derived. Advanced natural language processing techniques can gauge the mood of a news article or financial review and quickly analyze a company's financial reports, condensing large sets of text into key points that researchers and analysts can easily leverage.
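At its simplest, "gauging the mood" of a headline can be done with a word-list (lexicon) score. The sketch below is purely illustrative: real financial NLP uses trained language models, and the positive/negative word lists here are invented, not any vendor's lexicon.

```python
# Hypothetical sentiment lexicons for illustration only.
POSITIVE = {"beat", "growth", "surge", "record", "profit"}
NEGATIVE = {"miss", "loss", "decline", "fraud", "drop"}

def headline_sentiment(headline):
    """Count positive minus negative words; the sign gives the 'mood'."""
    words = [w.strip(".,!?;:").lower() for w in headline.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

Even this crude score hints at how text can be condensed into a single signal an analyst (or a trading model) can consume at scale.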

There is growing interest in quantitative trading - using large data sets to identify patterns that can be applied to trading. Although most companies aren't fully automated, many are integrating new technology into their operations. Alpaca, based in San Mateo, California, combines deep learning and high-speed data storage to identify patterns in market price changes. It recently partnered with news giant Bloomberg to provide software that delivers short-term forecasts in real time for major markets.

Finance is an ideal breeding ground for automated processes - it has a vast amount of publicly available data. The increase in computational power over the last decade or so makes these fields a good match for each other. Companies and investors from both financial and AI sectors are cautiously optimistic about the future of machines in finance. However, only time will tell whether AI is truly the best route to go in the financial sector. The ultimate future of AI will depend on its ability to turn a profit.

Prediction v. Judgement

Although technology is constantly improving, artificial intelligence still needs a human touch to keep it on track. Modern software struggles to predict crises because every crisis is unique; it needs a wealth of historical data to compare against before it can make a prediction. Fund managers play an important role in the integration of AI, using their instincts to guide machines. Still, AI will make waves in the financial sector with its ability to refine and improve human predictions.

Profiling clients by risk is a crucial ability for financial institutions. AI is an excellent tool for banks and insurance companies because it can automate the categorization of clients based on their risk profile. Advisors can associate financial products with each risk profile and, from there, optimize product recommendations for each client.
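The profile-to-product pipeline might look like the sketch below. Everything here is hypothetical: the client attributes, thresholds, bucket names, and product shelf are invented for illustration; a production system would learn these mappings from labeled client data rather than hand-coded rules.

```python
# Illustrative rule-based risk profiling; thresholds are invented.
def risk_profile(age, investment_horizon_years, loss_tolerance_pct):
    """Map simple client attributes to a coarse risk bucket."""
    score = 0
    score += 2 if age < 40 else (1 if age < 60 else 0)
    score += 2 if investment_horizon_years >= 10 else (1 if investment_horizon_years >= 3 else 0)
    score += 2 if loss_tolerance_pct >= 20 else (1 if loss_tolerance_pct >= 10 else 0)
    if score >= 5:
        return "aggressive"
    if score >= 3:
        return "balanced"
    return "conservative"

# Hypothetical product shelf associated with each profile.
PRODUCTS = {
    "conservative": ["bond fund"],
    "balanced": ["index fund", "bond fund"],
    "aggressive": ["equity fund", "index fund"],
}
```

An advisor would then recommend from `PRODUCTS[risk_profile(...)]`, which is the "associate products with each risk profile" step described above.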

Similarly, the technology can be applied to valuation models for investment and banking in general. Such models calculate the value of an asset from surrounding data points and historical examples. This approach is traditionally used in real estate, where models are trained on previous sales transactions, but it can be applied in financial firms as well, using economic indicators and growth predictions, among other inputs, to predict the value of a company and its assets.
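A minimal version of the real-estate case is a comparables model: value the asset as the average price of its most similar past sales. The features, distance weighting, and sales data below are all invented for illustration; real valuation models use far more inputs.

```python
# Toy comparables-based valuation: average the prices of the k most
# similar historical sales. Data and feature scaling are illustrative.
def estimate_value(subject, past_sales, k=3):
    """subject: (sqft, bedrooms); past_sales: list of ((sqft, bedrooms), price)."""
    def distance(a, b):
        # Scale square footage down so both features matter comparably.
        return abs(a[0] - b[0]) / 100.0 + abs(a[1] - b[1])
    nearest = sorted(past_sales, key=lambda sale: distance(subject, sale[0]))[:k]
    return sum(price for _, price in nearest) / len(nearest)

sales = [((1000, 2), 300_000), ((1100, 2), 320_000),
         ((1500, 3), 450_000), ((2000, 4), 600_000)]
```

The same "find similar historical examples and interpolate" idea carries over to valuing companies, with economic indicators playing the role of square footage.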

Although we are fast entering a world that runs on computers, humans will still play a big role in the era of AI. Fund managers, in particular, are critical to implementing machines into a firm's day-to-day operations. Because they rely on historical data, machines are not trained to anticipate or respond to events that haven't happened before; every crisis is unique, requiring a human touch to guide technology through stormy seas. A manager's intuition about economic trends is the foundation of long-term strategies. Machines can find patterns and make predictions, but human intuition in guiding and refining those predictions is equally critical to the process.

AI Risk Management

Many of life’s necessities - like landing a job and renting an apartment - hinge on having good credit. Banks and credit lenders are using artificial intelligence solutions to more accurately assess borrowers in the credit evaluation and approval process. ZestFinance is the maker of the Zest Automated Machine Learning (ZAML) platform, which helps companies assess borrowers that have a paucity of credit information or history. Scienaptic Systems is another company that runs an underwriting platform for banks and credit institutions. In just three weeks with a major credit card company, Scienaptic achieved $151 million in loss savings.

Accurate and timely forecasts are crucial to many businesses in the finance world. Financial markets are using machine learning to create more nimble models. These predictions can be used to leverage existing data, helping financial experts pinpoint trends and identify risks while conserving manpower. Financial institutions like J.P. Morgan, Bank of America, and Morgan Stanley are integrating machine intelligence and data analytics into their operations. In March 2018, S&P Global announced a deal to acquire Kensho for about $550 million. Kensho’s software uses a combination of cloud computing and natural language processing to answer complex financial questions. Ayasdi is another company deploying software to understand and manage risk.

Personalized Banking

Banks are also joining the technology craze. As in retail, many banks are looking to use AI in chatbot software, increasing customer satisfaction and efficiency without the expense of hiring extra customer service workers. A study of 33,000 banking customers found that 54% want tools to help them monitor their budget and make real-time spending adjustments. Using AI to learn from customers can help create a better banking experience for all.

Trim is a smart app that helps users save money by analyzing their spending. The app can cancel subscriptions, find alternative options for services like insurance, and even negotiate bills. Trim has saved $6.3 million for over 50,000 people. Sun Life created a virtual assistant, Ella, which sends users reminders to help them stay on top of their insurance plans. Using computers to interact with customers is not new, but chatbots are a new approach to automated customer service: they can cope with a huge variety of unstructured responses, and they continually refine how they interact with consumers.

Financial institutions like Bank of America are also adopting smart technology in the hope that it will maintain and increase customer loyalty. Bank of America uses a bot called Erica as a digital financial assistant. The bot lets users search their historical data for a specific transaction and computes their total credit and debt, two tasks that used to be time-consuming for users. JPMorgan Chase is also increasing its connectivity by launching a mobile banking app, making banking accessible from anywhere at any time of day.

Customer Satisfaction and Engagement

Artificial intelligence can also document customer information in a timely, efficient manner, drastically improving the user experience. Many are familiar with the processes of the insurance industry: clients pay to subscribe to a policy, but activating their coverage after an incident is often lengthy and complicated. Transactional bots can make this process much less painful. A transactional bot walks the customer through the process, taking in photos and videos of the damage and other information required to process the claim. The bot can also run the application through fraud detection and provide potential payout values.

Having a bot in charge of the entire cycle can reduce costs and operational tasks for the company and cut errors overall. Features like image recognition, fraud detection, and payout prediction upgrade the entire user journey, improving the experience for both users and the insurance company. Lemonade, a New York-based insurance startup, is leading the charge on this front. Its motto - "Forget everything you know about insurance" - signals how it intends to disrupt the industry through the use of AI. Since its founding in 2015, Lemonade has raised over $180 million. The Chinese financial services group Ping An is incorporating similar software that can offer a while-you-wait quote to settle a claim.

Retaining clients is a key ability in every industry and business. AI can support managers here by analyzing clients for signs that they are considering cancelling their policy. By providing a prioritized list based on client behavior, AI lets the manager offer better service and improved products to higher-priority clients.

Cybersecurity and Fraud Detection

One of the most powerful applications of artificial intelligence is fraud detection and prevention. Huge quantities of digital transactions take place via online accounts and applications, and it is impossible for humans to monitor all of these transfers, so there is an urgent need to ramp up cybersecurity and fraud detection efforts. Darktrace creates cybersecurity solutions for financial institutions; the company's machine learning platform analyzes network data to detect suspicious activity before it can damage a financial firm.

Computers may also be able to leverage human behavior to detect potential instances of fraud. Although micro-expressions are not infallible, they can be incorporated into fraud detection algorithms. Technology can also spot other patterns of potentially fraudulent behavior early on. For example, GoCompare, in partnership with analytics company Featurespace, can detect suspicious behavior like repeated changes to a name, employment, or postcode, and block the transaction or raise an alert.
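The "repeated profile changes" rule can be sketched very simply. To be clear, the field names, window, and threshold below are illustrative assumptions, not GoCompare's or Featurespace's actual rules, which are statistical and far more sophisticated.

```python
# Hypothetical rule: flag an account when watched profile fields change
# too often within a recent window. Thresholds are invented.
from collections import Counter

WATCHED_FIELDS = {"name", "employment", "postcode"}

def should_flag(change_events, now_day, window_days=30, max_changes=2):
    """change_events: list of (day_number, field_name) edits to the profile."""
    recent = [field for day, field in change_events
              if now_day - day <= window_days and field in WATCHED_FIELDS]
    counts = Counter(recent)
    return any(n > max_changes for n in counts.values())
```

A flagged result would then trigger the block-or-alert step described above, with a human reviewer making the final call.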

Citi Ventures, Citigroup's venture investing arm, is venturing deep into artificial intelligence, big data analysis, and machine learning. It has made multiple investments in companies deploying machine learning in new and innovative ways. One such company, Feedzai, can scan large amounts of data and recognize threats as they emerge, sending real-time alerts to customers. Citi Ventures continues to have an active presence in fintech, investing in companies focused on topics ranging from cybersecurity to real estate.

MasterCard also aims to increase convenience while reducing the risks of fraud and cybercrime. However, it must be mindful to avoid flagging genuine transactions as fraudulent. MasterCard built its Decision Intelligence platform to reduce false declines and make fraud detection more accurate, and it acquired the AI company Brighterion as part of its mission to make all online payments fraud-free. Over time, the self-teaching algorithms should make better decisions about fraud detection.

AI-Powered Blockchain Smart Contracts

One of the most powerful applications of AI comes in its combination with blockchain, a system for storing and tracking digital information in an encrypted, distributed ledger. In a blockchain, data is encrypted and distributed across multiple computers, creating highly robust databases that can only be accessed by those with permission.

Applying machine learning to consumer actions, like filling out contracts and submitting incident reports, often raises questions of user privacy and security, since relevant financial data is often sensitive data. Combining blockchain with AI algorithms enables software to better predict and detect fraudulent financial transactions and to build trust between contracting parties.

Human Intelligence Prevails

In a globally connected world, there is an urgent need for automated analysis that far exceeds human abilities. The rapid evolution of computing technology, which provides advanced analytical capabilities at ever lower costs, makes automation more and more attractive. Ultimately, automation lets employees focus their energy on revenue-generating activities and customer concerns. But while technology is spreading rapidly into many fields, humans are still in the driver's seat.

Trust is still critical for anything to happen. Even the most accurate algorithm could go unused if customers didn't trust it or the company behind it. Building that trust requires, to some degree, a personal relationship, which robots are not yet capable of. People are wired to look to others to confirm they are making the "right decision," whether it comes to cars or stocks, and it's a lot easier to trust a human than a faceless computer. That human element is what makes people so valuable alongside technology. Ultimately, human contributions to the field are just as critical as technology's, if not more.

Sources:

Detrixhe, John. “Why Robot Traders Haven’t Replaced All the Humans at the New York Stock Exchange—Yet.” Quartz, https://qz.com/1078602/why-the-new-york-stock-exchange-nyse-still-has-human-brokers-on-the-trading-floor/.

Dua, Amit. Artificial Intelligence Use Cases in FinTech. https://www.datascience.com/blog/artificial-intelligence-use-cases-in-fintech.

Groenfeldt, Tom. “Citi Ventures Deploys Machine Learning And Artificial Intelligence With People.” Forbes, https://www.forbes.com/sites/tomgroenfeldt/2016/10/31/citi-ventures-deploys-machine-learning-and-artificial-intelligence-with-people/.

“How AI Will Invade Every Corner of Wall Street.” Bloomberg.Com, 5 Dec. 2017. www.bloomberg.com, https://www.bloomberg.com/news/features/2017-12-05/how-ai-will-invade-every-corner-of-wall-street.

Hudson, Corbin. “Ten Applications of AI to Fintech.” Medium, 28 Nov. 2018, https://towardsdatascience.com/ten-applications-of-ai-to-fintech-22d626c2fdac.

Kulnigg, Thomas. “Combining Blockchain and AI to Make Smart Contracts Smarter.” Schoenherr, https://www.schoenherr.eu/publications/publication-detail/combining-blockchain-and-ai-to-make-smart-contracts-smarter/.

Maney, Kevin. “How Artificial Intelligence Will Transform Wall Street.” Newsweek, 26 Feb. 2017, https://www.newsweek.com/how-artificial-intelligence-transform-wall-street-560637.

Marr, Bernard. “Artificial Intelligence And Blockchain: 3 Major Benefits Of Combining These Two Mega-Trends.” Forbes, https://www.forbes.com/sites/bernardmarr/2018/03/02/artificial-intelligence-and-blockchain-3-major-benefits-of-combining-these-two-mega-trends/.

Maskey, Sameer. “How Artificial Intelligence Is Helping Financial Institutions.” Forbes, https://www.forbes.com/sites/forbestechcouncil/2018/12/05/how-artificial-intelligence-is-helping-financial-institutions/.

McPartland, Kevin. “Robots Have Not Taken Over Wall Street.” Forbes, https://www.forbes.com/sites/kevinmcpartland/2019/02/04/robots-have-not-taken-over-wall-street/.

Pickford, James, and Lucy Warwick-Ching. “How AI Will Change the Way You Manage Your Money.” Financial Times, https://www.ft.com/content/37ca12d8-b90a-11e9-8a88-aa6628ac896c.

Ryan, Philip. “Citi Ventures on the Lookout for Machine Learning Startups in 2018.” Bank Innovation, https://bankinnovation.net/allposts/operations/comp-reg/citi-ventures-on-the-lookout-for-machine-learning-startups-in-2018/.

Santariano, Adam. “Silicon Valley Hedge Fund Takes On Wall Street With AI Trader.” Bloomberg.com, 6 Feb. 2017, https://www.bloomberg.com/news/articles/2017-02-06/silicon-valley-hedge-fund-takes-on-wall-street-with-ai-trader.

Schroer, Alyssa. “AI and the Bottom Line: 15 Examples of Artificial Intelligence in Finance.” Built In, https://builtin.com/artificial-intelligence/ai-finance-banking-applications-companies.


Our Equal Opportunity Technology program is made possible thanks to Los Altos Community Foundation community grant award. Visit here for more information.

The Machine Edge

By Sarah Yung

Artificial intelligence is disrupting industries around the world in new and profound ways. Although movies depict AI as technology of the future, it is already behind the scenes in the entertainment and retail industries. Many companies are beginning to explore the world of deep learning and artificial intelligence and its potential to improve customer engagement. Take Netflix, an incredibly successful Internet entertainment service. Netflix utilizes algorithms rooted in machine learning to offer a personalized recommendations system. Much of their success can be attributed to their close engagement with modern technological developments.

Data + Deep Learning = Insight = Value

To improve their recommendation system, Netflix must sift through vast amounts of data, using research in a field known as “big data.” Netflix trains their software by feeding massive amounts of information to neural networks, which mimic how the human brain identifies patterns. As one of Amazon Web Services’ largest customers, it is only fitting that Netflix takes advantage of Amazon’s powerful cloud infrastructure to train their machines. This technology can then be used to analyze movies and TV shows and determine how different users would respond.

Netflix’s Research and Engineering department develops software for all sorts of tasks, from personalizing the Netflix homepage to choosing which artwork Netflix will use to present each movie or series. This team, in addition to developing algorithms, tests them on users to study their effectiveness. They track two user groups - one using the current service and one using new recommendation software - then analyze the long-term metrics. One of the most telling statistics is whether people stay subscribed over time. Since Netflix offers a monthly subscription, data is collected over a relatively long period of time.
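The long-term metric described above comes down to simple cohort arithmetic. A minimal sketch of the comparison, with made-up subscriber counts (Netflix’s real figures are private):

```python
# Hypothetical cohorts for the kind of A/B test described above:
# each group starts with 1,000 subscribers; we count who stays.
control   = {"start": 1000, "still_subscribed": 912}   # current service
treatment = {"start": 1000, "still_subscribed": 941}   # new recommender

def retention(group):
    """Fraction of the cohort still subscribed at the end of the window."""
    return group["still_subscribed"] / group["start"]

lift = retention(treatment) - retention(control)
print(f"retention lift: {lift:.1%}")  # prints "retention lift: 2.9%"
```

A positive lift over a long window is the kind of signal that would favor rolling out the new recommender; all names and numbers here are illustrative.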

Of course, Netflix isn’t the only player in the deep learning game. Facebook, IBM, and Google are also making headway into this cutting edge field. Spotify, in particular, utilizes developments in the fields of big data and artificial intelligence to create the personalized playlists that they have become renowned for. For example, their Discover Weekly playlist is described as a “best friend creating a personalized mixtape.” Spotify is a data-driven company, continually acquiring data points, and they have begun using machines to manage that data to find new connections. Connections found by computers sifting through massive amounts of data are key to creating playlists like Discover Weekly.
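Connections like these are often found with collaborative filtering: listeners with similar play histories tend to enjoy the same tracks. A toy sketch of the idea (user names, tracks, and counts are all invented, and Spotify’s actual system is far more sophisticated):

```python
import math

# Hypothetical listening counts: user -> {track: play_count}
plays = {
    "ana":  {"song_a": 10, "song_b": 3, "song_c": 0},
    "ben":  {"song_a": 8,  "song_b": 4, "song_c": 1},
    "cara": {"song_a": 0,  "song_b": 9, "song_c": 7},
}

def cosine(u, v):
    """Cosine similarity between two users' play-count vectors."""
    tracks = set(u) | set(v)
    dot = sum(u.get(t, 0) * v.get(t, 0) for t in tracks)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user, k=1):
    """Suggest tracks the most similar listener plays that `user` hasn't."""
    ranked = sorted((cosine(plays[user], plays[o]), o)
                    for o in plays if o != user)
    _, nearest = ranked[-1]                 # the most similar listener
    return [t for t, n in plays[nearest].items()
            if n > 0 and plays[user].get(t, 0) == 0][:k]

print(recommend("ana"))  # ana's tastes match ben's, so she gets song_c
```

Real systems add matrix factorization, audio analysis, and much more data, but the core is the same: similarity in behavior predicts similarity in taste.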

Recently, Francois Pachet, an expert on machine-composed music, joined the Spotify team. As we discussed in a previous post - “Machine Arts” - creative computers are far more prolific than human artists. For example, the Microsoft computer XiaoIce produced over 10,000 poems in 2,760 hours, far faster than any human author. However, Spotify says they want to “focus on making tools to help artists in their creative process.” In July of 2018, Pachet released Hello World, the first music album composed with artificial intelligence, through the label Flow Records. Spotify has also rolled out tools like Spotify for Artists and Fans First to help artists better understand their fan base and adjust their online presence accordingly.

Spotify continues to humanize the massive amounts of data they collect in innovative ways, like in their global ad campaign highlighting bizarre user habits. Their creative use of machine learning will continue long into the future, strengthened by acquisitions of several companies in the deep learning field. Continued investment in these technologies will allow them to glean valuable insights from their massive amounts of data - not just odd user habits.

The Power of AI in Digital Marketing

AI is also a powerful tool for those seeking to market their product or service to the public. In the age of social media, there is no better way to reach a large customer base quickly. But good publicity requires an effective advertisement directed at the correct population, and with such a massive amount of data arriving so fast, companies must be able to analyze it just as fast. Many companies are turning to machine learning in the era of big data. Insights from data analysis give companies resources like customized buyer profiles and content optimized for those personas. Spending on AI platforms that can perform this data analysis at ever-increasing rates is expected to reach about $57 billion by 2021.

Using artificial intelligence allows retailers to leverage the fast-growing Internet of Things. Both the Internet of Things - a network of computers embedded in everyday objects - and social media are fast becoming part of our lifestyle, and will soon be very important to businesses. Data collected from users’ social media habits and daily routines can be used to create personalized advertisements, powerful tools for attracting potential customers.

Once they’ve attracted customers, retailers can also use this technology to share personalized content in real time. It can analyze ad performance and create targeted ads that draw consumers to the retailer. Once the consumer is on the site, the software can alter the site to keep the consumer engaged - offering discounts and pulling items to the front based on their personal interests. Machines work 24/7, unlike humans, and can make many subtle changes to websites to increase the odds of making a sale, giving in-touch retailers an edge over their competitors. A personalized website experience, tailored to the consumer’s interests and backed by useful customer service - perhaps facilitated by a chatbot - can turn potential customers into loyal customers.

As in every generation, innovation will quickly become the difference between businesses that sink and businesses that float. Successful companies in today’s world almost all have a strong online presence, enabling them to attract a loyal customer base. Companies like Sears and Toys “R” Us failed to adapt to new ways of interacting with customers and never established a strong online presence, which led to their ultimate demise. However, companies like ASOS and North Face grew substantially due to their incorporation of new technology. Both incorporated a virtual assistant into their websites, which improved the customer experience by offering personalized recommendations.

The Machine Who Knew Too Much

But how much data are you willing to give these companies? Personalized recommendations are nice, but not at the expense of your privacy. Machines need data to make predictions - and it is those predictions that power the targeted advertisements that are such a powerful tool.

Google leverages their prevalence in our lives to give their customers more visibility. My mom tells stories of researching softball gloves for my budding interest, then later receiving ads for softball equipment. No human was keeping track of her searches. Google picked up on keywords in her searches and used those to create targeted ads that they later displayed.

While Google taps into this valuable resource behind the scenes, Amazon pays consumers for access to this data vein. On their Prime Day bonanza, Amazon offered a deal. If users downloaded the Amazon Assistant app, they would receive a $10 credit. Amazon Assistant is a browser extension, shopping assistant, and recommendation tool. The assistant also allows Amazon access to users’ browser data. A spokesperson for Amazon said that the assistant is completely optional, complying with their privacy policy. 

Many are leery of the “buying” and “selling” of personal data. But this isn’t a typical marketplace and cannot be interpreted as one. People don’t own their data, and big companies aren’t technically buying or selling it. They use data as an indirect revenue-generating strategy - selling ever-improving targeted advertising. Much of our daily lives has moved into the cloud, and we are still defining the boundaries of user privacy in this uncharted territory.

Tapping into IoT

Integrating into the age of technology is no easy feat, but it will allow pioneering companies to reap the benefits. In the era of connectivity, edge computing will be everywhere. Edge computing is the practice of processing data where it is generated instead of sending it to a data center. The explosion of mobile phones and other smart devices (the so-called Internet of Things, or IoT) producing massive amounts of data has made edge computing a powerful tool.

Machine learning and increased computing power make these “edge devices” extraordinarily smart, and they are constantly improving. On-device AI can provide real-time insights and predictive analyses - enabling features that are incredibly attractive to consumers. Its reliability does not depend on network availability, since data is processed on the device, making data processing effectively instantaneous. Consumers also benefit from increased security because sensitive data stays on the device.
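The on-device idea can be sketched in a few lines (the readings and alert threshold below are hypothetical): raw data stays local, and only a compact summary ever crosses the network.

```python
# Edge-computing sketch: process a raw sensor stream on the device,
# then send only a small summary upstream instead of every reading.

def summarize_on_device(readings, alert_threshold=75.0):
    """Reduce a raw stream to the few numbers the data center needs."""
    avg = sum(readings) / len(readings)
    alerts = [r for r in readings if r > alert_threshold]
    return {"mean": round(avg, 1), "n_alerts": len(alerts)}

raw = [61.2, 64.8, 77.5, 70.1, 80.3]   # e.g. one minute of temperature data
payload = summarize_on_device(raw)
print(payload)   # only this small dict crosses the network
```

The device answers locally and instantly (no round trip), and the raw readings never leave it, which is exactly the latency and privacy benefit described above.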

The rise of 5G networks makes edge computing an even more exciting development. 5G offers significantly higher data rates and system capacity while reducing the time and cost of transferring data. This will enable edge devices not only to process their own data, but to communicate and share data with other devices. The possibilities afforded by rapid and inexpensive connectivity are taking every industry by storm. Companies that embrace AI in this new age will disrupt their industries in profound ways, while those who don’t will be left behind.

Sources:

“5 Ways AI Creates a Personalized Digital Experience.” Multichannel Merchant, 31 May 2019, https://multichannelmerchant.com/blog/5-ways-ai-creates-personalized-digital-experience/.

Chen, Alex, et al. “Distributed Neural Networks with GPUs in the AWS Cloud.” Medium, 19 Apr. 2017, https://medium.com/netflix-techblog/distributed-neural-networks-with-gpus-in-the-aws-cloud-ccf71e82056b.

“François Pachet - Director of Spotify Creator Technology Research Lab.” Francoispachet.Fr, https://www.francoispachet.fr/.

Fussell, Sidney. “What Amazon Thinks You’re Worth.” The Atlantic, 18 July 2019, https://www.theatlantic.com/technology/archive/2019/07/amazon-pays-users-access-browser-data/594199/.

Harrison, Kate. “4 Ways Artificial Intelligence Can Improve Your Marketing (Plus 10 Provider Suggestions).” Forbes, https://www.forbes.com/sites/kateharrison/2019/01/20/5-ways-artificial-intelligence-can-improve-your-marketing-plus-10-provider-suggestions/.

“How AI-Driven Content Improves Personalization and Digital Experiences.” RIS News, https://risnews.com/how-ai-driven-content-improves-personalization-and-digital-experiences.

“How Netflix’s Recommendations System Works.” Help Center, https://help.netflix.com/en/node/100639. Accessed 21 Aug. 2019.

Marr, Bernard. “The Amazing Ways Spotify Uses Big Data, AI And Machine Learning To Drive Business Success.” Forbes, https://www.forbes.com/sites/bernardmarr/2017/10/30/the-amazing-ways-spotify-uses-big-data-ai-and-machine-learning-to-drive-business-success/.

Morgan, Blake. “The 7 Best Examples Of Artificial Intelligence To Improve Personalization.” Forbes, https://www.forbes.com/sites/blakemorgan/2019/01/24/the-7-best-examples-of-artificial-intelligence-to-improve-personalization/.

Russell, Kyle. “Netflix Is ‘Training’ Its Recommendation System By Using Amazon’s Cloud To Mimic The Human Brain.” Business Insider, https://www.businessinsider.com/netflix-using-ai-to-suggest-better-films-2014-2.

Toh, Allison. “How Netflix Uses AI to Find Your Next Binge-Worthy Show.” The Official NVIDIA Blog, 1 June 2018, https://blogs.nvidia.com/blog/2018/06/01/how-netflix-uses-ai/.

“Why AI and Edge Computing Is Capturing so Much Attention.” VentureBeat, 10 Apr. 2019, https://venturebeat.com/2019/04/10/why-ai-and-edge-computing-is-capturing-so-much-attention/.



Intelligent Transportation

By Sarah Yung

“The brakes! Hit the brakes!” My driving instructor yelled frantically as I came screeching to a halt in front of a stop sign. I shifted nervously as he adjusted his seat belt from the sudden stop. Between checking the mirrors, staying on my side of the road, and keeping my hands at 3 and 9, I forgot to keep my eyes up for street signs.  Luckily, I had a second pair of eyes with me. Though, according to my parents, it gets easier with time, every driver has moments like this. No driver is infallible, but eventually, we lose that extra pair of eyes. Many people - students seeking the freedom of a car, workers yawning their way through morning commute - could benefit from vehicles that could handle themselves, with no need of another pair of eyes.

Concept of Self-driving car - Credit: Dreamstime


Driverless cars are no longer an unrealistic feature of science fiction films - they are a very real facet of today’s society. Self-driving cars log millions of miles on public roads in states like California, Florida, and Michigan. Google cars - each with a distinctive dome-like sensor perched on the roof - cruising through the streets are a common sight in Silicon Valley, although drivers are still impatient behind the one car on the road actually driving the speed limit. Autonomous features, however, are already on the market. Features like assisted parking, invaluable to today’s drivers, are based on artificial intelligence.

Automakers and tech giants are pouring billions into this budding industry. Many automakers want to be top dogs when self-driving cars enter the market. In 2015, Volvo became the first automobile manufacturer to accept full liability for autonomous vehicles. Soon after, GM acquired Cruise Automation, a company that develops and tests self-driving vehicles. BMW followed by opening a facility outside Munich to work on autonomous vehicles. Google and Tesla are leaders in developing this technology, although they take different approaches. 

On one hand, Google uses lidar sensor technology to dive straight into cars without steering wheels or pedals. Lidar, or light detection and ranging, is a remote sensing system. Similar to radar, lidar uses pulses of waves to scan its surroundings; instead of radio waves, however, lidar uses light in the form of a pulsed laser. On the other hand, Tesla takes a more moderate approach, rolling out Autopilot and self-parking features to their cars on the market. Tesla’s Autopilot software “enables your car to steer, accelerate and brake automatically within its lane.” Although Autopilot still requires the driver to remain attentive and prepared, it uses technology and algorithms very similar to those that will one day be implemented in fully autonomous vehicles.

Three technologies are key to the success of self-driving vehicles. Sensors, including radar, ultrasonics, cameras, and lidar, give the machine information about its immediate environment, enabling the machine to navigate the car safely. Connectivity gives the car other information about its environment - weather, traffic conditions, and road infrastructure.  The emergence of 5G wireless technology will support rapid and consistent connectivity between vehicles and the network. 5G cellular networks’ primary benefit is improved speed compared to 4G networks - with latency dropping to around 10 ms from 50 ms. Cars can connect to each other with this technology, adding another safety measure to prevent collisions. Finally, software and control algorithms tie it all together by capturing data from sensors and connectivity and making decisions concerning steering, braking, and acceleration. Though it sounds easy enough, the algorithms must be able to handle both simple and complex driving situations robustly to be implemented safely on the road.
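How the third ingredient ties the first two together can be sketched as a toy decision rule (the distances and thresholds below are invented for illustration; production control algorithms are vastly more complex and safety-certified):

```python
# Toy control-loop sketch: fuse two range sensors plus a network signal
# into a single driving decision, as the paragraph above describes.

def decide(lidar_m, radar_m, network_warning=False):
    """Brake if either sensor sees a close obstacle or the network
    reports a hazard ahead; slow when something is near; else cruise."""
    nearest = min(lidar_m, radar_m)   # trust the closer of the two readings
    if network_warning or nearest < 10.0:
        return "brake"
    if nearest < 30.0:
        return "slow"
    return "cruise"

print(decide(42.0, 45.0))                        # prints "cruise"
print(decide(25.0, 28.0))                        # prints "slow"
print(decide(80.0, 90.0, network_warning=True))  # prints "brake"
```

Taking the minimum of the two sensors is a deliberately conservative fusion choice: the system always acts on the most pessimistic reading available.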

Much of this technology is already in use on the road. Many modern navigational tools give the driver real-time route optimization by analyzing traffic conditions on possible routes ahead. Today, many cars incorporate semi-autonomous features, also known as advanced driver-assistance systems (ADAS). ADAS includes functions like emergency braking, cruise control, and lane-departure warnings. Machine-based systems have an advantage over human drivers in this regard because they are not affected by fatigue or human emotions. As technology advances, these machines will develop a more efficient structure and become more sophisticated in their response to a variety of environments. Auto manufacturers continue to take incremental steps towards full autonomy, where each component is controlled by a central computer.

Autonomous vehicles bring many potential benefits. Having a unified network of self-driving cars will lead to increased lane capacity and reduced energy consumption. Self-driving vehicles will also be able to perform real-time route optimization, cutting down travel time. As we integrate more self-driving vehicles into society, computers will receive more and more data which allows them to better optimize their decisions, further increasing the benefits. Autonomous vehicles will provide transport for people who can’t drive themselves, like the elderly and infirm. However, the primary benefit of autonomous vehicles is the improved safety. Machines can more easily scan in every direction and are alert all the time. Many other benefits revolve around increased safety - reducing insurance costs, environmental impact, strain on emergency response, and toll on human life dramatically.

However, autonomous vehicles are not infallible. Collisions will inevitably occur when putting such heavy machines on the road. Those developing self-driving cars face numerous moral dilemmas when it comes to these collisions.  MIT tackles these debates head-on with the “Moral Machine.” The Moral Machine is “a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.” The Moral Machine has the user act as the “brain” of a driverless car and choose what they consider to be the lesser of two evils.  The basic scenario is choosing between hitting a pedestrian or driving into an obstacle and injuring the passenger. The Moral Machine has many scenarios with different types of people - varying based on gender, disabilities, occupation, etc. The ethical dilemma about the relative values of human life will remain at the forefront of machine intelligence development far into the foreseeable future.

Public perception of self-driving cars is a major hurdle in putting them into the market. In 2018, a collision in Tempe, Arizona, which killed a pedestrian, caused an uptick in public skepticism of autonomous vehicles. Public skepticism has remained fairly consistent in the past few years, despite the huge investments in the market. Tech giants and auto manufacturers are acknowledging the problem. Waymo joined with Cruise Automation and 22 other organizations to form the Partnership for Automated Vehicle Education (PAVE), which aims to ease consumer concerns about self-driving vehicles. PAVE will set up self-driving test rides and educational workshops, as well as develop informational materials. They hope that this will not only inform the public, but bring policymakers to the table. Proponents of self-driving cars need policymakers to unify the rules of the road and to establish rules for self-driving cars. Machines excel at following the rules - we just need to decide what those rules are.

55% of Americans believe that self-driving cars will take the road by 2029. Despite the extraordinary potential benefits, many fear the unknown technology and its potential ramifications for safety. As automobile companies continue to perfect their technology, self-driving cars will become more sophisticated in their response to various conditions. And someday, they may take the road next to human drivers.

Sources

“5G vs. 4G | Differences in Speed, Latency, and Coverage Explained.” Digital Trends, 30 Apr. 2019, https://www.digitaltrends.com/mobile/5g-vs-4g/.

Autopilot. https://www.tesla.com/autopilot. Accessed 12 Aug. 2019.

“Americans Still Fear Self-Driving Cars.” Bloomberg.com, 14 Mar. 2019, https://www.bloomberg.com/news/articles/2019-03-14/americans-still-fear-self-driving-cars. Accessed 12 Aug. 2019.

Insights, MIT Technology Review. “Self-Driving Cars Take the Wheel.” MIT Technology Review, https://www.technologyreview.com/s/612754/self-driving-cars-take-the-wheel/. Accessed 12 Aug. 2019.

“Moral Machine.” Moral Machine, http://moralmachine.mit.edu. Accessed 12 Aug. 2019.

“The Science of Self-Driving Cars.” The Franklin Institute, 1 Aug. 2016, https://www.fi.edu/science-of-selfdriving-cars.

“What Is Lidar and What Is It Used For?” American Geosciences Institute, 9 May 2017, https://www.americangeosciences.org/critical-issues/faq/what-lidar-and-what-it-used.



Machine Arts

By Sarah Yung

Machine intelligence is approaching the new frontier - creativity. Machine intelligence has been a game changer in healthcare (among numerous other fields), helping identify cancerous growths, protect patient records, and assist in surgery. Now, teams of researchers from all over the world are exploring other fields for computer intelligence to exercise its new skills. Much of this technology pushes the envelope in artificial intelligence. Human creators like Da Vinci and Tesla set a high bar for any software to match. Researchers are taking fledgling steps into this field, but their results show promise for powerful tools in the future.

Although computers that can create their own work are unprecedented, tools like Grammarly and SpellCheck are already frequently used by writers to make sure they can properly convey their ideas. Additionally, there are promising developments in programs that will protect authors and their work. Emma Identity, for example, is a self-learning technology that can detect authorship by analyzing writing style. Artificial intelligence programs today allow creators to focus on developing their ideas instead of nitpicking over syntax or protecting their work. New developments will hopefully continue to augment the ability of today’s creators in new and significant ways.

More recently, Microsoft has been leading the charge into creative technology. Currently, its researchers are focusing their attention on poetry. Their recent successes are notable because poetry is an especially challenging form of information synthesis. Poetry involves a certain level of conceptualization and abstraction - instead of direct descriptions, poets make references to people, places, and objects with similarities. The AI system Microsoft XiaoIce has been trained to write poetry from keywords. The system was developed in 2014 and has continued evolving at an increasing rate. In 2017, a Chinese publishing company released Sunshine Misses Windows, XiaoIce’s first-ever poetry collection. The anthology drew on more than 10,000 poems written over 2,760 hours, of which 139 were selected for publication.

From XiaoIce, Microsoft researchers turned their attention to more challenging projects. Another project generates poetic language in response to images. To do so, experimenters fed the machine image-poem pairs from a large poetry database. The researchers tested and refined the software’s poetry on over 8,000 images, which were then evaluated by both machine algorithms and human readers. Researcher Bei Liu has her own favorite poem created during the study, paired with the image to the right: 

The sun is shining

The wind moves

Naked trees

You dance

Some researchers are focusing their attention on other forms of writing. Ross Goodwin and a team of scientists created screenwriting software built on Long Short-Term Memory (LSTM), a type of artificial recurrent neural network. Unlike feedforward networks, which feed information straight through the algorithm once, a recurrent neural network has a feedback loop that connects it to its past decisions. This computer wrote the short film Sunspring, the story of three people - H, H2, and C - living in a futuristic world, where they are entangled in a love triangle. Sunspring, directed by Oscar Sharp, was presented at Sci-Fi London, where it was selected as one of the 10 best short films. 
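The feedback loop is the whole trick. In the toy recurrent step below (weights chosen arbitrarily for illustration, not an LSTM's actual gating), a single input "pulse" keeps echoing through later outputs via the hidden state, whereas a feedforward net given the same later zeros would respond identically each time:

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.9):
    """One recurrent update: the new state mixes the input with the old state."""
    return math.tanh(w_x * x + w_h * h)

h, outputs = 0.0, []
for x in [1.0, 0.0, 0.0]:          # a single pulse followed by silence
    h = rnn_step(x, h)
    outputs.append(round(h, 3))

print(outputs)   # the pulse fades gradually instead of vanishing at once
```

An LSTM adds gates that let the network learn how long to hold such "memories," which is what makes it workable for something as long-range as a screenplay.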

Novels written by computers are also making ripples in the literary world. A team from Future University Hakodate in Hokkaido, led by professor Hitoshi Matsubara, developed an AI that entered a short story as a candidate for the Hoshi Prize. The Hoshi Prize is a Japanese science fiction award, one of the only competitions to allow entries from computers. The short story entered by Matsubara’s team - “A day when a computer writes fiction” - made it past the first round of judging, but the judges then decided it did not compare to its human counterparts. Researchers found that, while the computer could emulate Hoshi’s writing style fairly accurately, it could not create good plots. For the competition, humans handled the plot creation, then the AI wrote the story. Although the story passed as human writing, the software still has a ways to go to match its human counterparts.

Google is also diving into artificial intelligence, having founded the Google Brain research team in the early 2010s. That team took a slightly different path, diving into the field of music. Their Magenta Project is a research project that “explor[es] the role of machine learning as a tool in the creative process.” The Magenta Project utilizes machine learning techniques to develop a gallery of machine-made art and music, which is continually updated to this day. Magenta uses a combination of deep learning algorithms and reinforcement learning algorithms for its creative process. Deep learning algorithms learn patterns from existing data, improving with each cycle of new data, while reinforcement learning algorithms improve through trial and error, guided by reward signals. After developing their algorithms, the team released their models and tools as open source, and they continue to collaborate with the public to modify and add to Magenta software. They recently worked on developing long-term coherence in music with patterns and themes and on introducing more interfaces for people to interact with.
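The reinforcement-learning side of that distinction can be made concrete with a miniature example (a two-armed "bandit" with made-up reward probabilities, not Magenta's actual method): the learner has no labeled dataset at all, only reward feedback from its own choices.

```python
import random

random.seed(0)   # fixed seed so the toy run is reproducible

# Two hidden "arms" with different payoff rates; the agent must discover
# the better one purely by trial and error.
true_reward = {"arm_a": 0.2, "arm_b": 0.8}   # hidden from the agent
value = {"arm_a": 0.0, "arm_b": 0.0}         # the agent's running estimates

for step in range(500):
    # epsilon-greedy: usually exploit the best estimate, sometimes explore
    if random.random() < 0.1:
        arm = random.choice(list(value))
    else:
        arm = max(value, key=value.get)
    reward = 1.0 if random.random() < true_reward[arm] else 0.0
    value[arm] += 0.1 * (reward - value[arm])  # nudge estimate toward reward

print(max(value, key=value.get))   # the arm the agent now believes is best
```

A deep learning model, by contrast, would need a table of (situation, correct answer) pairs up front; here the "training signal" is generated by acting.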

As more work is done towards developing creative computers, some creators have expressed fear of such technology. Computers can churn out work at a rate that far surpasses any human - XiaoIce, again, wrote over 10,000 poems in 2,760 hours. However, right now, and far into the foreseeable future, computers cannot genuinely feel complex human emotions, only mimic them. Some feel that because of this, instead of replacing authors, AIs should replace literary agents, editors, and publishers. Using repositories of published works, AIs could critique released work based on information in their databases. Even if AI doesn’t go into the field of publishing, the researchers behind these developments don’t intend to replace human authors. 

Far from replacement, these teams intend for AI to augment creative activity. While full-fledged, creative, uniquely thinking artificial intelligence is far in the future, the capability of today’s technology should not be ignored. In many ways, humans and robots are collaborating to put forth the best material possible. Humans are constantly innovating and changing the rules, and today’s software has a long way to go to match that. Right now, computers provide a medium to communicate concepts and ideas. Soon, they will become an integral part of the creative process. Let us approach this new frontier boldly - who knows what lies beyond!

Sources:

“Artificial Intelligence In Creative Writing : A Curse Or A Blessing For Authors?” The Bookish Elf, 8 July 2018, https://www.bookishelf.com/artificial-intelligence-creative-writing/.

Duncan, Joe. “The Future of Writing in the World of Artificial Intelligence.” Medium, 10 Mar. 2019, https://writingcooperative.com/the-future-of-writing-in-the-world-of-artificial-intelligence-9ca9b6babb9c.

EMMA. Defining Writing Identity. Disrupting Plagiarism. https://emmaidentity.com/. Accessed 17 Aug. 2019.

Lewis, Danny. “An AI-Written Novella Almost Won a Literary Prize.” Smithsonian, https://www.smithsonianmag.com/smart-news/ai-written-novella-almost-won-literary-prize-180958577/. Accessed 11 Aug. 2019.

“Magenta.” Magenta, https://magenta.tensorflow.org/. Accessed 11 Aug. 2019.

Marr, Bernard. “Artificial Intelligence: What’s The Difference Between Deep Learning And Reinforcement Learning?” Forbes, https://www.forbes.com/sites/bernardmarr/2018/10/22/artificial-intelligence-whats-the-difference-between-deep-learning-and-reinforcement-learning/. Accessed 11 Aug. 2019.

Otake, Tomoko. “Japanese Researchers Take Artificial Intelligence toward the Final Frontier: Creativity.” The Japan Times Online, 19 June 2016. Japan Times Online, https://www.japantimes.co.jp/news/2016/06/19/national/science-health/japanese-researchers-take-artificial-intelligence-toward-the-final-frontier-creativity/.

Sunspring. www.imdb.com, http://www.imdb.com/title/tt5794766/. Accessed 11 Aug. 2019.

“The Poet in the Machine: Auto-Generation of Poetry Directly from Images through Multi-Adversarial Training – and a Little Inspiration.” Microsoft Research, 18 Oct. 2018, https://www.microsoft.com/en-us/research/blog/the-poet-in-the-machine-auto-generation-of-poetry-directly-from-images-through-multi-adversarial-training-and-a-little-inspiration/.



AI in the classroom: the future is now

By Alice Liu

Artificial Intelligence (AI) and machine learning are taking the world by storm, bringing innovation and functionality to everyday life -- all while providing convenience and utility to the community. AI is used in services ranging from simple ones such as Siri to more complicated ones such as the personalized lesson plans now common in classrooms. The field of education has seen much improvement and development over the centuries with new methods and technology. Whether it’s speech recognition or self-driving cars, AI has proved itself to be a useful technological force for the future. Now, with the ever-growing inventions and uses of Artificial Intelligence, teachers and students alike can find even more uses for tech in the classroom.

A student takes an interactive online chemistry and biology course. Image credit: Dreamstime


PERSONALIZED LEARNING THROUGH AI

Teaching a class full of students and making sure that they all understand and retain the information being taught can be tough, especially when their ways of learning differ from each other. However, with AI in the classroom, teachers and students alike can use programs for smart content, such as digitized textbooks, or intelligent tutoring systems tailored to each student’s needs.

DIGITIZED TEXTBOOKS

Millions of students are using digitized textbook software, notably from Pearson, an education company whose software uses students’ data to automatically provide real-time feedback the way a teacher would. Pearson is one of many companies transitioning from paper to digital textbooks, making it easier to publish new and improved material online and keep it accessible whenever and wherever students need it. Pearson offers up-to-date content at a reasonable price, as do many other companies attempting to digitize their paper textbooks and make them easier for students to access.

Another popular example of digitized learning is Rosetta Stone, where users can learn different languages with the help of AI and a virtual learning system. It uses image and speech recognition for the most effective user experience in learning foreign languages. Its technology identifies the word being spoken, compares the user’s voice data against native-speaker samples 100 times per second, and provides real-time assessment. Systems like Rosetta Stone and Pearson are innovative ways of helping people learn through AI-powered systems. Not only can a personalized learning experience be essential for a student’s understanding and success, but it can also provide useful information for teachers about how each student is learning so they can adjust their curriculum.
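Rosetta Stone’s actual models are proprietary, but the general idea of scoring a learner’s speech against native-speaker samples can be shown with a toy sketch (the function name and feature values below are invented for illustration; real systems use far richer acoustic features than a handful of numbers):

```python
import math

def pronunciation_score(user_frames, native_frames):
    # Toy assessment: root-mean-square difference between the learner's
    # per-frame voice features and a native-speaker template, mapped to
    # a 0-100 score (closer to the template -> higher score).
    diffs = [(u - n) ** 2 for u, n in zip(user_frames, native_frames)]
    rmse = math.sqrt(sum(diffs) / len(diffs))
    return max(0.0, 100.0 * (1.0 - rmse))

native = [0.2, 0.5, 0.9, 0.4]     # made-up template features for one word
close = [0.25, 0.5, 0.85, 0.45]   # learner attempt close to the template
far = [0.9, 0.1, 0.2, 0.9]        # learner attempt far from the template

print(pronunciation_score(close, native) > pronunciation_score(far, native))  # True
```

A real-time system would run a comparison like this on a sliding window of audio frames, many times per second.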

INTELLIGENT TUTORING SYSTEMS

Students are different and unique in their own ways, whether it be their learning style, knowledge of different materials, or even personality. Teachers usually accommodate those differences and needs in the learning environment, but could technology help with that even more? The answer is yes, through the learning algorithms of intelligent tutoring systems (“ITS”).

ITS, using AI, can transform teaching to adapt to a student’s weaknesses and help them work on the areas where they need assistance. After ALEKS (Assessment and LEarning in Knowledge Spaces), an ITS, was introduced in a math course at Clemson University, the pass rate jumped from 45 to 70 percent. Through cognitive tutoring systems and ITS, students can drastically improve their skills in a specific area. For example, if a student is struggling with a problem, a cognitive tutoring system will take data gathered from how the student answered previous questions, apply what it knows from that data, identify which part of the question is difficult, and follow up with exercises to help the student practice that skill.
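As a minimal sketch of this idea (not ALEKS’s actual algorithm -- the class name and skill labels here are invented), a tutoring loop can track per-skill accuracy from past answers and drill the weakest skill next:

```python
from collections import defaultdict

class ToyTutor:
    """Toy intelligent-tutoring sketch: track per-skill accuracy
    from past answers and recommend the weakest skill to drill."""

    def __init__(self):
        # skill -> [correct_count, attempt_count]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, skill, correct):
        # Log one answered question tagged with the skill it exercises.
        s = self.stats[skill]
        s[0] += int(correct)
        s[1] += 1

    def accuracy(self, skill):
        correct, attempts = self.stats[skill]
        return correct / attempts if attempts else 0.0

    def next_skill(self):
        # Recommend the skill with the lowest observed accuracy.
        return min(self.stats, key=self.accuracy)

tutor = ToyTutor()
for skill, ok in [("fractions", True), ("fractions", False),
                  ("decimals", True), ("decimals", True)]:
    tutor.record(skill, ok)

print(tutor.next_skill())  # -> fractions
```

Real systems layer far more on top of this -- probabilistic models of skill mastery, question difficulty, and forgetting -- but the core loop of observe, estimate, and target weaknesses is the same.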

HOW AI EMPOWERS AND AUGMENTS TEACHERS’ CAPABILITIES

The main difference between these AI-powered software systems and actual teachers in the classroom is that the former are more accessible through the internet. Despite the increased convenience of smart education systems, they will never be able to replace a good teacher. Instead, researchers hope AI will augment student learning by taking over menial tasks, freeing up teachers’ time to better motivate and connect with their students.

On top of that, AI can assist teachers with tasks such as grading and plagiarism checks. One major use of AI is Turnitin, an online plagiarism detector that promotes academic integrity among students and makes it easier for teachers to grade papers. Another system that utilizes AI is Gradescope, a grading software system that helps teachers grade and mark essays more efficiently.

Needless to say, AI brings a great deal to the classroom. With other emerging technologies, it is entirely possible that AI may soon take over the classroom with new and innovative teaching and learning devices. From personalized courses to digital learning, AI is certainly taking a different approach from the more “traditional” way of learning. According to Charles Fadel, the founder of the Center for Curriculum Redesign, “AI is arguably the number one driving technological force of the first half of the century…” AI is improving students’ and teachers’ lives in the classroom by providing access to new information and intelligent tutoring systems, and by being a great overall resource for enriching learning.

BOTTOM LINE: IS AI HELPING OR HURTING OUR EDUCATION SYSTEM?

AI is becoming increasingly ubiquitous worldwide, permeating our lives often without our knowledge. According to RAND Corporation reports, “AI has so far found a perch in three "core challenges" of teaching: intelligent tutoring systems, automated essay scoring and early warning systems to identify struggling students who may be at risk of not graduating.” As much as AI can be used to level the playing field of education, some fear that it may widen the AI divide: AI tools will help advanced students and affluent school districts excel even more, leaving other students lagging behind due to lack of computer technology and connectivity. As schools begin to embrace AI in the classroom, students who do not have access to technology are at a huge disadvantage. Researchers have long been concerned about the chicken-and-egg correlation between wealth and education. Ready or not, the AI revolution is here and is likely to exacerbate the education gap. Whether you are a teacher, student, or technologist, get involved and collaborate to help shape the future of AI learning. Ultimately, it is up to key stakeholders to work towards lessening the digital divide between the haves and have-nots. Only then will all be free to reap the benefits of AI.

Alice Liu serves as intern at Equal Opportunity Technology (EqOpTech), a nonprofit organization that promotes equal access to technology. EqOpTech strives to enable at-risk students with refurbished computers to leverage the AI education opportunity.

WORKS CITED:

Faggella, Daniel. “Examples of Artificial Intelligence in Education.” Emerj, Emerj, 24 Apr. 2019, emerj.com/ai-sector-overviews/examples-of-artificial-intelligence-in-education/.

Griswold, Alison. “This Cognitive Tutor Software Is Already Having A Revolutionary Effect.” Business Insider, Business Insider, 6 Mar. 2014, www.businessinsider.com/cognitive-models-and-computer-tutors-2014-3.

Johnson, Alyssa. “5 Ways AI Is Changing The Education Industry.” ELearning Industry, 15 Feb. 2019, elearningindustry.com/ai-is-changing-the-education-industry-5-ways.

Loeffler, John. “Personalized Learning: Artificial Intelligence and Education in the Future.” Interesting Engineering, Interesting Engineering, 25 Dec. 2018, interestingengineering.com/personalized-learning-artificial-intelligence-and-education-in-the-future.

“Speech Recognition.” Rosetta Stone, www.rosettastone.com/speech-recognition.

Marr, Bernard. “How Is AI Used In Education -- Real World Examples Of Today And A Peek Into The Future.” Forbes, Forbes Magazine, 25 July 2018, www.forbes.com/sites/bernardmarr/2018/07/25/how-is-ai-used-in-education-real-world-examples-of-today-and-a-peek-into-the-future/#17123993586e.

McKenzie, Lindsay. “Inside Higher Ed.” Pearson Goes All in on Digital-First Strategy for Textbooks, 16 July 2019, www.insidehighered.com/digital-learning/article/2019/07/16/pearson-goes-all-digital-first-strategy-textbooks.

Sandle, Tim. “Artificial Intelligence Used to Mark Exam Papers.” Digital Journal: A Global Digital Media Network, 29 May 2018, www.digitaljournal.com/tech-and-science/technology/artificial-intelligence-used-to-mark-exam-papers/article/523361.

Vander Ark, Tom. “The Promise and Implications of Artificial Intelligence in Education.” Getting Smart, 1 Apr. 2019, www.gettingsmart.com/2019/04/smart-review-the-promise-and-implications-of-artificial-intelligence-in-education/.

Zimmerman, Eli. “Educators Tailor Services to Individual Students with AI.” Technology Solutions That Drive Education, 27 June 2018, edtechmagazine.com/higher/article/2018/06/educators-tailor-services-individual-students-ai.

Schaffhauser, Dian. “AI in Education Shows Most Promise for the Repetitive and Predictable.” THE Journal, 28 Feb. 2019, thejournal.com/articles/2019/02/28/ai-in-education-shows-most-promise-for-the-repetitive-and-predictable.



The Digital Divide in Education

By Terence Lee, March 26th, 2018

Abstract:

Digital Equity = Equal Access to Computers + Free Broadband + Computer Literacy.

This research paper examines the issue of digital divide and provides insights to tackling this multi-faceted challenge facing our nation.

High School volunteers mentoring at-risk students at Sunday Friends in Khan Academy

Photo courtesy of Terence Lee

It’s nearing the end of the school day. Several students watch the clock anxiously, waiting for the bell to ring. As the blaring tone echoes across the school, the teacher reminds everyone over the rising cacophony of students hurrying to leave that they all have several online modules to complete before midnight tonight and a practice online test due next week. For most of the students, going online and spending the necessary thirty minutes to an hour on the modules should be no problem (aside from slight annoyance over having homework that day). However, for some of the kids in the class, getting on the internet to finish homework from six to seven classes every night is a near-impossible task. I talked in depth with one of these students about his internet access dilemma (for anonymity, I am calling him Matthew). At home, Matthew’s parents live paycheck to paycheck and cannot afford the cost of buying either a computer or monthly internet coverage. Several options remain for Matthew: waiting for a thirty-minute computer session at a town library several towns over with over forty people in front of him, standing outside a free wi-fi hotspot at a nearby restaurant stalled by every connection drop, or borrowing a computer from a compassionate classmate. In school, he finds himself constantly turning in his assignments late and lags behind his peers in learning and research due to the lack of internet connectivity. Though Matthew’s life goal is to become a biologist, as time passes, this goal has grown increasingly difficult. When it is time for college applications, Matthew may not have good enough grades to get into a good university in the major he wants. Students like Matthew are put at a disadvantage, lagging behind in grades and career opportunities simply because they lack easy access to technology.

The Digital Revolution

Over the past few decades, the ubiquity of computers and internet access has encompassed the majority of western civilization. At the same time, the shift from paper to digital for everyday tasks over the last few years has transformed internet usage “[from] a luxury [to] a necessity” in the words of former President Obama (Knibbs). Yet despite the exponential growth of technology in the digital era, there exists an economic, educational, and social schism between those who have easy and unregulated access to the internet and those who do not. Dubbed the Digital Divide, this issue has remained largely unaddressed by both the general public and the government. With the futures of many at-risk students at stake, it is critical for society to gain a deeper understanding of the impact the Digital Divide has on education and formulate ways to combat digital inequity.

Technology Integration In Education - 21st Century Learning

The increasing integration of technology into education creates several challenges for at-risk students. Today, “seven in 10 teachers now [assign] homework that requires web access” (Kang). Incorporating technology heavily into an educational curriculum is becoming an educational standard meant to help prepare students for the real world. In fact, several schools have elected to “write our own digital textbooks” to be accessed by remotely connecting to the school’s network through the internet (Bendici). With the majority of homework assigned through the cloud, some teachers expect all their students to have internet access. Students who are unable to meet their deadlines due to lack of tech not only lag behind in learning, but also suffer poor grades (Kang). These obstacles have forced low-income students to find time-consuming workarounds just to finish their daily assignments. For many low-income students, waiting in long lines to use a library computer or riding a public bus for hours to use their phone are the only choices they have, despite their impracticality.

Not All Schools are Technologically-Engaged Equally

Some believe that the Digital Divide in education can be addressed through action from local school districts and the surrounding community. For some districts this has been a successful initiative; for others, it is a daunting task. Districts like the Mountain View-Los Altos District in Silicon Valley have pledged hundreds of thousands of dollars and partnered with major tech corporations such as Google to provide every student a Chromebook for use at home and during class as part of a new computer-based learning curriculum (“MVLA rolls out laptop integration” Newell). Other districts, such as South Fayette near Pittsburgh, are actively working with nearby Carnegie Mellon University “to help develop its new computer science curriculum and train its teachers [and provide its students] access to some of the best minds in the region” (Herold). Though these programs are successful in promoting digital equity and preparing students for the future, they tend to be geographically focused and available only to school districts that are more affluent or in close proximity to technological innovation.

Looking at how the Digital Divide affects modern-day schooling, two main problems exist for underfunded school districts across the US: “[a] lack of resources and problems in the community they serve” (Herold). For most of these schools, finding financial or ideological support to promote and develop tech-based learning (TBL) can be near impossible. Despite the federal E-Rate Program helping to “[provide] broadband to libraries and schools,” it remains a constant struggle for impoverished schools to avoid running out of money (Vick; Herold). The CEO of the Innovative Educator Consulting Network, which helps schools integrate technology into their curricula, brings up the difficulty for schools in low-income areas to “prioritize and fix what’s most important” when everything is in a constant state of disrepair (Harm qtd. in Herold).

The Sto-Rox school district near Pittsburgh, ranked 102nd out of 103 districts in Pennsylvania’s Allegheny County, now sends “20% of [the school’s] annual budget to charter schools,” to which many of its students have fled in search of better options (Herold). In the classroom, the Sto-Rox district struggles to get even a small portion of its students online during class: its 30-60 Chromebooks, split among 1,300 students, “sat unused for more than a year…[because] the district didn’t have consistent funding,” and its dozens of interactive whiteboards have faulty adapters that are too expensive to replace (Herold).

Though most school districts recognize the lasting effects of digital inequity, over 70 percent have not taken subsequent action, often because they lack a “clear vision...about what learning should look like and why,” as observed by Keith Krueger, CEO of CoSN, a nonprofit of K-12 technology leaders (Krueger qtd. in Bendici; Herold). This lack of clarity and focus on integrating TBL for at-risk students creates what has been dubbed an educational “vicious cycle,” in which a lack of TBL engagement causes a lack of interest and vice versa. Technology commitment in education is key: a “lack of engagement...when educators do not practice inclusive strategies in their teaching,” combined with students feeling that technology “is not part of their self identity,” creates further hurdles that perpetuate the divide (Subramony qtd. in Rogers).

Even among schools that integrate technology into their curriculum, teaching styles and levels of student engagement differ. For example, more-affluent schools have connected classroom learning to real-life problem solving by blending technology into project-based learning (PBL). Under this nontraditional PBL approach, students are coached to learn and leverage technology tools to problem-solve and present solutions, from online research and collaboration using Google Hangouts or Google Docs to shooting video and creating iMovies for TED-style talks. In contrast, “students from low socioeconomic backgrounds use computers in school differently from more affluent students” (Jornell). A recent study comparing schools in high and low socioeconomic areas of California found that “students in poorer schools use computers to make presentations of existing material while wealthier schools encourage students to research, edit papers, and perform statistical analyses” (Warschauer qtd. in Jornell).

Technology integration and engagement issues are common among underfunded districts; at the same time, other alternatives are evolving. Several impoverished school districts are implementing these new options to address both digital inequity and annual funding issues. These schools have elected to forego the costs of purchasing and maintaining hardcover textbooks for their tens of thousands of students, switching to a newly developed digital curriculum (Bendici). Aligned with state standards and updated annually, the digital content allows schools to purchase devices at a one-to-one ratio, enabling all their students to access the internet at home.

Digital Equity = Equal Access to Computers + Free Broadband + Computer Literacy

In order to efficiently take action towards narrowing the Digital Divide, it is important to recognize that digital equity is not limited to equal access to computer technology and internet connectivity; it also requires computer literacy. Despite the more limited options available in poverty-stricken areas, there are many small actions that can provide relief to students in need. Using resources such as CoSN’s Digital Equity Action Toolkit, school districts can analyze students’ limitations in accessing the web and ultimately implement “low-cost, simple efforts to assist low-income families” (Bendici). Such actions could include distributing maps marking the locations of free Wi-Fi areas to students, coordinating with local corporations to set up free hotspots, or even building municipal networks to reduce the overall cost of broadband coverage in an area (though it should be noted that in 20 states, cable companies have lobbied lawmakers to outright ban municipal networks) (Vick).

Cathy Cox of the Academic Senate for California Community Colleges states, “there are many reasons students lack the necessary computer literacy skills. One simple fact is that many students may not have access to computers in their homes” (Cox). In an effort to address this, Sunday Friends, a nonprofit in the Silicon Valley, is dedicated to helping low-income families bridge the Digital Divide by providing computer literacy classes and an opportunity to earn a computer. Several times a month, the nonprofit hosts STEM-related activities and computer education for students and families. Sunday Friends’ “Computer Education For Families” program stresses the value of computer literacy and advocates computer learning by both parents and children. Through this program, parents learn the necessary computer skills to help their children with homework, communicate with teachers via e-mail, and access school news online. The program also teaches basic, intermediate, and advanced computer and math skills classes, and awards students their own laptop upon completion of the nonprofit’s STEM curriculum. According to the nonprofit, it “[recognizes] that children who have positive experiences with STEM are more likely to apply themselves to learning STEM in school, which may lead to successful careers that build on STEM” (“LAHS Freshman seeks tech donations” Sunday Friends qtd. in Newell). Despite the numerous families that Sunday Friends has assisted with computer access and computer literacy classes, the organization is unable to address the cost of internet access, which remains too expensive for many at-risk families as many lack “steady jobs and are barely paying their rents” (Talati).

Broadband Expansion To Rural America

Compounding this issue, in rural areas that lack pre-existing infrastructure, “large internet service providers...struggle to make a return on their investment...given the lack of customer base” and the difficult nature of installing broadband and fiber-optic cables, according to Gladys Palpallatoc of the California Emerging Technology Fund (Palpallatoc qtd. in Huval). Rob Blick, a computer programmer located in the Conotton Valley of Ohio, “can understand why cable companies don’t want to...wire his neck of the woods,” comparing broadband coverage to a “modern-day equivalent of the interstate highway system” (Blick qtd. in Vick). The lack of broadband access in less urban areas has led tech experts to adopt the mentality that internet access should be “like access to public roads. Today anyone who can walk, drive, or take a bus can [get to where they need to be] for free. For some it’s easier and for some it might be harder - but it’s available” (Talati). To address the issue of affordable internet connectivity, several small Internet Service Providers (ISPs), including Cox Communications, have offered “high-speed internet access for $9.95 per month to [students]...on free or reduced-price lunch” after negotiation with local school districts (Bendici). In cities such as Mountain View, presiding tech corporations have given $800,000 to expand free wireless networks for public use (Noack). Though the Wi-Fi is slow and unreliable, this is a step in the right direction for digital equity. By promoting cheaper internet options and free alternatives, at-risk students can be given the tools necessary to keep up with the rest of their classmates as well as the world around them.

More recently, several tech companies have started proposing solutions for achieving internet coverage in low-income areas. One of these, SoftBank, plans to deploy an industrial blimp that “will plug into the backbone of the internet, and then will be able to project a wireless network to customers at a range as big as 10,000 square kilometers” (Rogers), providing a stable and fast internet connection to anyone in range. On the other hand, the prospect of major tech corporations providing any sort of technological aid towards the Digital Divide has stagnated. Instead, companies like Google and Apple are electing to send “their philanthropy abroad” to countries like India because “they think it’s their new market,” where they can drastically increase the number of people connected to the internet, ultimately to sell and advertise their services (Palpallatoc qtd. in Huval).

Can The Government Close The Digital Divide?

While the community and school districts search for ways to engage students in technology, some believe that precedent or legislation set by the US government could provide great momentum towards digital equity. Over the last few years, however, digital inequity has become a partisan topic, resulting in a political stalemate. To better understand the standstill, it is important to understand the history of federal aid programs for technology and internet access. One of the first major broadband programs, the ConnectHome program, was enacted by then-president Obama in 2015 to address the Digital Divide. This program partnered with Google to provide “free home internet access... in its twelve Google Fiber markets…[serving] 275,000 low-income homes in 27 cities” (Knibbs). At the time though, Democrats and Republicans were divided on the right way to provide aid, with Democrats supporting federal grants and loans while Republicans were reluctant to authorize such large amounts of cash that would “prop up new companies to compete with existing internet providers” (Romm). Other programs such as the California Advanced Services Fund (CASF), Lifeline, E-Rate, and most recently the Internet for All Act have all found varying degrees of success. In the case of the CASF, the program gave broadband providers the ability to receive 300 million dollars of grant money to incentivize them to build fiber optic cables in impoverished areas where providers would otherwise see low rates of return (Ulloa). In 2016, several lawmakers reintroduced the Internet for All Now Act to allocate more funds to the CASF, ultimately facing heavy criticism because it imposed “a burden on consumers and [was] poorly managed...with some money being used to build connections in remote - but not necessarily needy - areas” (Ulloa).
At the beginning of 2018, President Trump announced his plans to allocate 200 billion dollars in federal funds to upgrade utilities such as roads, bridges, and broadband networks. The proposed idea quickly sparked disagreement between Democrats and Republicans, with the former believing that the latter’s plans “[to] make it easier for [broadband providers] to...install small boxes that can beam speedy wireless service…[do] not solve any of our country’s most pressing broadband infrastructure problems” (Romm). Instead, Democrats believe that a large influx of federal funds is the most surefire way to achieve fast internet connection across the nation, while Republicans are unwilling to allocate the necessary funds under the argument that “mobile internet could act as a viable substitute for home broadband” (“Redefining 'Broadband'” O’Rielly qtd. in Finley). With 34% of those without easy access to the internet acknowledging a subsequent “[disadvantage] in developing new career skills or taking [school] classes,” it is vital for the right and left to agree on a resolution that will bring forth major change for the tech divide (Lee).

Recent Federal Communications Commission Actions Could Widen the Digital Gap

Despite the dire importance of legislation and programs supporting digital equity, such aid faces a new threat in the form of the Federal Communications Commission’s chairman, Ajit Pai. Since his appointment in 2017 by President Trump, Chairman Pai has “vowed to close the divide ‘between those who can use cutting-edge communications services and those who do not,’” yet has taken a rather roundabout path towards addressing such inequities (Vick). Pai has been deeply skeptical of government programs such as Lifeline and E-Rate, choosing to oppose proposed expansions of both. The chairman and the Republican majority on the FCC have planned changes to the Lifeline program that cut down on the subsidies it offers, the number of people it is available to, and the number of carriers it covers, on the grounds that there exists “widespread abuse” of the program (“The FCC's Latest Moves” Finley). Additionally, these changes also “allow telecom companies to decommission aging DSL connections...without replacing them,” which sparked concern in rural areas due to a dearth of high-speed cable internet (“The FCC's Latest Moves” Finley). In the near future, it is expected that the Republican-led FCC will vote to lower the standard of broadband coverage, which, according to Roberto Gallardo, a researcher at Purdue’s Center for Regional Development, “[could] reduce the motivation of broadband providers to expand service into rural communities, which already lag behind urban areas in both speed and availability of high speed internet” (“Redefining ‘Broadband’” Gallardo qtd. in Finley). As an alternative to such programs, the chairman maintains his belief that major broadband providers, who for the most part have set up little to no infrastructure in impoverished neighborhoods, will provide fast internet speeds to everyone (Vick).
In response to the aforementioned concerns, Pai has brushed them off as fear-mongering meant to belittle the capabilities of major broadband providers (Belvedere).

Deregulating Net Neutrality Worsens the Digital Divide

In addition to the legislation and programs that can be enacted, the manner and strength with which the Federal Communications Commission regulates ISPs play a vital role in the Digital Divide. Mr. Pai has long been a critic of Net Neutrality, asserting that the FCC’s reclassification of broadband as a public utility was an attempt to “replace internet ‘freedom with government control’” (Pai qtd. in Meyer). Originally enacted in 2015 under the Obama administration, Net Neutrality is the principle that broadband providers should treat all data as equal regardless of its origin. The University of Maryland reports that “83 percent of Americans do not approve of the FCC proposal [to repeal Net Neutrality]...including 3 out of 4 Republicans” (“This poll gave Americans” Fung). Chairman Pai nonetheless successfully repealed Net Neutrality in late 2017, allowing for more “light touch regulation” (Pai qtd. in Low).

With Net Neutrality repealed, Pai is hopeful that in the future, “[the FCC’s] general regulatory approach will be a more sober one that is guided by evidence, sound economic analysis, and a good dose of humility” (Pai qtd. in Meyer). Directly contradicting Pai’s “[vow] to close the Digital Divide,” the unbridled power that ISPs now have over their consumers will only serve to increase digital inequity through pay barriers for reliable and usable internet access that low-income families cannot hope to afford (Vick). The issue with ISPs’ newfound ability to throttle data is that it can easily be exploited for profit: by creating slow and fast lanes for internet speed, ISPs can charge an exorbitant premium for fast and reliable service while throttling those in the slow lane to a grinding halt as an incentive to pay for a more expensive package. Vijay Talati, VP of Engineering at Juniper Networks and Board Secretary of Sunday Friends, views the recent repeal of Net Neutrality as “a step in the wrong direction… [diminishing] the hopes for free internet access” while furthering the Digital Divide between socioeconomic classes (Talati). For communities filled with at-risk students, even if ISPs wanted to build infrastructure in their area and offer coverage, the repeal of Net Neutrality ultimately gives broadband providers complete power over the price of internet coverage (LeMoult).

Digital Equity is not One Man’s Task

The battle against digital inequity in education is far from over. Recent advances in technology are a double-edged sword in that they both help and hurt the divide. While it is widely believed that the technology boom worsens the Digital Divide by leaving low-income and rural areas behind, tech innovation can conversely level the playing field in technology access and affordability. The quest to solve digital inequity has so far focused on technology access, but that is only half of the equation. The missing variable is technology engagement, interest, and adoption. The availability of collaborative open-source software such as Linux and e-learning platforms like Khan Academy can enable free, modern technology and education for all. Given the lack of tech interest and engagement in certain regions and the diverse cognitive behaviors of students from low socioeconomic backgrounds, I also see an opportunity to develop an online teaching system that optimizes learning by teaching students in an engaging, interactive, and adaptive way so no child is left behind. Using deep learning, a branch of artificial intelligence, and techniques such as reinforcement learning, the system could apply predictive data analysis to determine the best method of presenting complex ideas. While artificial intelligence is in its infancy, it has the potential to close the digital gap and enrich student engagement on a global scale.

Consequently, digital equality is not one man’s task alone; its success hinges on the teamwork and collaboration of school districts, communities, technologists, states, and the federal government. Together, each and every constituent can explore possible solutions, whether it is individuals lobbying Congress to change the law or tech philanthropists inventing the next breakthrough in personalized learning. It is the combined efforts of many that have the potential to bring forth digital equity, one student at a time.

Works Cited

Belvedere, Matthew J. “Trump's FCC Chair: We Didn't Break the Internet and We're Going to Make It Better.” CNBC, CNBC, 27 Feb. 2018.

Bendici, Ray. "BRIDGING the Digital Divide: OER, New Software and Business Partnerships Can Connect All Students with Educational Technology." District Administration, vol. 53, no. 10, Oct. 2017, p. 54. EBSCOhost.

Chin, Sharon. “Teenager's Project Refurbishes Laptops For Students In Need.” CBS San Francisco, 10 Aug. 2017.

Cox, Cathy. “The Digital Divide: Information Competency, Computer Literacy, and Community College Proficiencies.” The Digital Divide: Information Competency, Computer Literacy, and Community College Proficiencies | ASCCC, Academic Senate for California Community Colleges, Mar. 2009.

Finley, Klint. “Redefining 'Broadband' Could Slow Rollout in Rural Areas.” Wired, Conde Nast, 30 Aug. 2017.

Finley, Klint. “The FCC's Latest Moves Could Worsen the Digital Divide.” Wired, Conde Nast, 17 Nov. 2017.

Fung, Brian. “FCC Plan Would Give Internet Providers Power to Choose the Sites Customers See and Use.” The Washington Post, WP Company, 21 Nov. 2017.

Fung, Brian. “This Poll Gave Americans a Detailed Case for and against the FCC's Net Neutrality Plan. The Reaction among Republicans Was Striking.” The Washington Post, WP Company, 12 Dec. 2017.

Herold, Benjamin. "Poor Students Face Digital Divide in How Teachers Learn to Use Tech." Education Digest, vol. 83, no. 3, Nov. 2017, p. 16. EBSCOhost.

Huval, Rebecca. “The Digital Divide in Silicon Valley's Backyard.” The Daily Dot, 14 Aug. 2016.

Journell, Wayne. “The Inequities of the Digital Divide: Is e-Learning a Solution?” E-Learning, University of Illinois at Urbana-Champagne, vol. 4, no. 2, 2007, pp. 138–149.

Kang, Cecilia. “Bridging a Digital Divide That Leaves Schoolchildren Behind.” The New York Times, The New York Times, 22 Feb. 2016.

Kastrenakes, Jacob. “FCC Will Block States from Passing Their Own Net Neutrality Laws.” The Verge, The Verge, 22 Nov. 2017.

Kaye, Leon. “Renewables Can Narrow the Global Digital Divide.” Triple Pundit: People, Planet, Profit, 27 Feb. 2018.

Knibbs, Kate. “Obama Has a Plan to End America's Internet Access Inequality Problem.” Gizmodo, Gizmodo.com, 15 July 2015.

Lee, Seung. “California's Digital Divide Closing but New 'under-Connected' Class Emerges.” The Mercury News, The Mercury News, 27 June 2017.

LeMoult, Craig. “If Net Neutrality Is Repealed, What Will It Mean For People Who Don't Have Broadband Yet?” WGBH News, 11 Dec. 2017.

Loizos, Connie. “Steve Jurvetson on Why the Digital Divide Needs to Be Addressed Now.” TechCrunch, TechCrunch, 17 Aug. 2017.

Low, Cherlynn. “What You Need to Know about Net Neutrality (before It Gets Taken Away).” Engadget, 1 Dec. 2017.

Meyer, David. “How FCC Chair Ajit Pai Took His Fight Against Net Neutrality to the Finish Line.” Fortune, 14 Dec. 2017.

Noack, Mark. “Google Gives $800,000 for Downtown WiFi.” Mountain View Online, 4 Jan. 2017.

Newell, Traci. “LAHS Freshman Seeks Tech Donations.” Los Altos Town Crier, 12 Aug. 2015.

Newell, Traci. “MVLA Rolls out Laptop Integration This Fall.” Los Altos Town Crier, 23 July 2014.

Quaintance, Zack. “The Quest for Digital Equity.” Government Technology: State & Local Government News Articles, Mar. 2018.

Rogers, Kaleigh. “Startup Thinks Its Tethered, Internet-Beaming Blimps Can Bridge the Digital Divide.” Motherboard, 20 Feb. 2018.

Rogers, Sylvia. "Bridging the 21st Century Digital Divide." Techtrends: Linking Research & Practice to Improve Learning, vol. 60, no. 3, May 2016, pp. 197-199. EBSCOhost.

Romm, Tony. “Washington's next Big Tech Battle: Closing the Country's Digital Divide.” Recode, Recode, 17 Jan. 2018.

Talati, Vijay. “The Educational Digital Divide in a Nonprofit Context.” 11 Feb. 2018.

Ulloa, Jazmine. “California Wanted to Bridge the Digital Divide but Left Rural Areas behind. Now That's about to Change.” Los Angeles Times, Los Angeles Times, 18 Jan. 2018.

Vick, Karl. "Internet for All." Time, vol. 189, no. 13, 10 Apr. 2017, p. 34. EBSCOhost.