This is the New Age
It started with coming to grips with the fact that I can be one of the best.
One of the brightest minds of my generation… from the entire world… among peers from countries spread all over Earth.
It was fascinating.
Being a semi-finalist and reaching the international stage of the Intel science fair was incredible.
When I got there – that’s when I finally realized how huge this competition was and how many teams from the whole world were in it.
And I was one of the select few who got to be there.
This. This moment is what changed things for me.
Skills, skills, skills – that’s how one of Gang Starr’s songs begins.
Web 2.0 brought tons of opportunities for young people (like I used to be :P) to grow and fully develop, like the Universal Man of the older ages.
Developing skills in business, investments, product management, gamification, tech, music, comic book creation, fiction writing, content creation, SEO, social media, networking, biz dev, pitching and presentations, live events, keynotes, public speaking … and well… a ton more.
All of these developments were made possible, thanks to Web 2.0.
Going from 3D animations, to game design, launching my own games, DJ-ing, creating music, doing cool Photoshop artwork, creating educational software; forums like GFXattack and DeviantArt used to help a lot with that.
It was pretty cool, because you got to be around many other creators. I started with this before high school and have been building tons of skills in multiple disciplines ever since.
I will talk more about this. My speeches at various tech events always help widen perspectives. I hope that my blog posts on Renaissance 2.0 will also help.
Life is a lot more fulfilling when you don’t specialize.
When you tackle everything that seems interesting, that’s when you will find yourself.
You won’t be great at everything — I’m not great at everything — but I know enough of everything to hire the best talent and to help my teammates grow.
There’s a lot more I have to say.
Everybody wants to specialize, but that’s just plain wrong. Yes, I’ve got a lot more to say.
One more Digital Personal Assistant is about to join the rest of the Assistants we embed in the SaaS products we build.
The Learning Assistant will soon be part of Education Cloud PLUS, and it will help you find the exact courses you need in order to enhance the skills you say you want to develop.
The purpose here is not to create another sophisticated AI like we have for Ranking Vision AI or the AI system in Squirrly Social.
The reason we’re building and giving this new Assistant is to help you easily assess your skills and easily find what you need to learn right now in order to improve those skills.
If users take the quiz and get a low score, then take the recommended courses, then take the quiz again AND obtain a much higher score, we will consider that our Learning Assistant did its job perfectly.
This recommendation engine is not built for entertainment, and the instructors will know exactly which scores call for which courses to be studied in order to improve them.
We build our Digital Assistants because it is the Squirrly Company’s core belief that technology has the purpose of making our lives easier.
The purpose of technology as a whole is to help the human user achieve A LOT MORE in A LOT LESS TIME.
This is the very opposite of apps that try to suck you in and consume all your time (and potentially leave you with mental health issues or various disorders).
The ways in which our technology helps our users vary widely.
Some of the things we want to help the user with (or help the user achieve) call for very sophisticated sets of algorithms, machine learning, big data, complex servers, and worker systems.
The purpose of our technology is to “assist” our user and help them achieve more in less time. It is NOT to make the Squirrly Company look smart, or cool, or to impress the press and other parties.
We will not over-complicate things, and based on our 20%-80% Core Value (Pursue the 20% that Brings You the 80%), we will do our best to help our user with the simplest model possible which achieves the desired results for our user.
I am presenting our Assistants, because I believe that every tech entrepreneur should RE-THINK the ways in which they are building the whole Customer Experience, not just UI and UX.
A great Digital Assistant will even replace the need for:
- training materials
- customer success teams
We have Assistants that replace the need for doing tons of calculations on your own, or doing research and finding sources on the web. We even have assistants that use some of our features on your behalf, while you sleep.
Technology and pieces of tech can be made to do a lot more for users.
Sometimes, people will build features for users and that is nice. But there could be small bits of the entire SaaS that actually use those features in CLEVER ways ON BEHALF of the customer (the user).
Make it easier for users.
Use whatever the tech has access to and do more for the user.
Sometimes assistants are just collections of data that would take months or years to master, given back to users so they can “plug and play” without needing to become masters themselves.
The Learning Assistant.
A user could very well take a quiz. And see a score of 2 out of 10.
Then they could browse a collection of 40 courses on Education Cloud PLUS to find out which one of the courses would help them (AT THEIR LEVEL) achieve a higher score and would help them build up their mastery.
Well… maybe they would find a course that seems good, but given that they scored 2 out of 10, the course may be way too advanced for them AT THE MOMENT.
And they should start with a lighter reading first. With something for beginners.
It would take a user a while to navigate all of this and figure it out.
Out of 10 users, maybe 8 would simply quit the whole thing, because it’s too tedious and doesn’t seem fruitful at all.
So why not help all 10 of them?
- It’s very easy for us.
We can’t just sit back and do nothing for them simply because it’s not usually done.
Our tech can offer them the assistance they need. Our tech knows they scored 2 out of 10. It can “figure out” that the user is a beginner in regard to the SKILL that our quiz was testing.
Therefore, our tech can “deduce” it needs to offer one of the Beginner-Level courses that we offer WHICH helps them build up THAT particular skill.
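The score-to-course logic described above can be sketched in a few lines. This is a minimal illustration only: the course titles, skill names, and score thresholds below are hypothetical assumptions, not the actual Education Cloud PLUS implementation.

```python
# Hypothetical sketch of the Learning Assistant's recommendation rule.
# Course catalog, skill names, and thresholds are illustrative assumptions.

SKILL_LEVELS = ["beginner", "intermediate", "advanced"]

# Each course targets one skill at one level (example data).
COURSES = [
    {"title": "SEO Basics", "skill": "seo", "level": "beginner"},
    {"title": "Keyword Research Deep Dive", "skill": "seo", "level": "intermediate"},
    {"title": "Technical SEO Audits", "skill": "seo", "level": "advanced"},
]

def level_for_score(score: int, max_score: int = 10) -> str:
    """Map a quiz score to a skill level (assumed thresholds)."""
    ratio = score / max_score
    if ratio < 0.4:
        return "beginner"
    if ratio < 0.75:
        return "intermediate"
    return "advanced"

def recommend(skill: str, score: int) -> list[dict]:
    """Recommend the courses that match the user's current level for a skill."""
    level = level_for_score(score)
    return [c for c in COURSES if c["skill"] == skill and c["level"] == level]

# A user who scored 2/10 on the SEO quiz is a beginner,
# so the assistant surfaces the beginner-level course:
print([c["title"] for c in recommend("seo", 2)])  # → ['SEO Basics']
```

The point is exactly what the text says: the value is in the assistance, not in how sophisticated the mechanism is; a plain lookup like this is already a working assistant.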
The way in which we implement this is not important at all.
What the tech can do for the user, and what it needs to do for the user, is a lot more important than HOW the tech can accomplish it.
Does it (meaning: what we have in mind as a potential solution right now) solve the user’s problem?
Does it help the user do A LOT MORE in a LOT LESS time?
— yes, it does. The Learning Assistant helps the user easily identify the next course to take that will build up their skills, mastery and quiz grade super fast.
Do you know how many human assistants work in stores all over the world just to assist customers in choosing what book to read, or what gift they should buy?
- our Learning Assistant will do something very similar to those.
Our Digital Assistants don’t need to be high tech. They need to be great assistants and help the user.
This is a very low tech example of a digital assistant that replaces the need for having a human assistant.
It was pretty great to finally get feedback from someone other than my brilliant co-writer on the Lore Novel I’m writing.
There were indeed many changes requested to help the reader better sink their teeth into the whole story.
But I’ve come up with some pretty welcome changes to it all.
Chapter 1 reads much better now and is a lot clearer. Plus, I’ve managed to better profile some of the human characters in it.
For Chapter 3, I really like the fantastical magical beings from Romanian and Transylvanian folklore and how they are now represented inside the book.
The current Chapter 3 will probably become Chapters 3 and 5, because there’s a lot of content inside at the moment, and it would also make more sense strictly from a storytelling point of view.
- asooo made
- minbyy made 40 000
- pendy made 4 000
- iffy made 18 750
I am pleased to announce that on LAUNCH DATE GOES HERE we’ll be releasing our new e-book, “TITLE GOES HERE.” This e-mail is to let you be the first to know about this e-book and to extend to you a special pre-order offer!
The e-book is… DESCRIBE WHAT THE E-BOOK IS ABOUT IN THIS PARAGRAPH. The e-book is NUMBER OF PAGES GOES HERE pages in length and is perfect for anyone in the YOUR INDUSTRY HERE industry.
Check out the e-book and pre-order with 10% off here:
INSERT URL HERE
By purchasing this e-book, you’ll learn how to:
• Statement #1
• Statement #2
• Statement #3
• And so much more!
We’re offering a special pre-order price of just PRICE HERE until END OF PRE-ORDER PERIOD, so if you grab your copy now, you’ll save on the cost of the e-book!
Register with the special pre-order pricing today:
INSERT URL HERE
If you have questions, please feel free to press ‘reply’ and ask!
YOUR WEB ADDRESS
Just over one year ago, corporate AI ethics became a regular headline issue for the first time.
In December 2020, Google had fired Timnit Gebru—one of its top AI ethics researchers—and in February 2021, it would terminate her ethics team co-lead, Margaret Mitchell. Though Google disputes their version of events, the terminations helped push some of the field’s formerly niche debates to the forefront of the tech world.
Big picture: Every algorithm, whether it’s dictating the contents of a social media feed or deciding if someone can secure a loan, could have real-world impacts and the potential to harm as much as it might help.
- Policymakers, tech companies, and researchers are all grappling with how best to address that fact, which has become impossible to ignore.
- And that is, in a nutshell, the field of AI ethics.
To get a sense of how the field will evolve this year, we checked in with seven AI ethics leaders about the opportunities and challenges facing the field this year.
The question we posed: “What’s the single biggest advancement you foresee in the AI ethics field this year? Conversely, what’s the most significant challenge?”
Click here to read the full piece—we’ve included one answer below.
Deborah Raji, fellow at Mozilla:
I think for a long time, policymakers have sort of relied on narratives from corporations, research papers, and the media, and projected an image—a very idealistic image—of how well AI systems are working. But as they make their way into real-world systems and get deployed, we’re increasingly aware of the fact that these systems fail in really significant ways, and that those failures can actually result in a lot of real harm to those that are impacted.
Specifically, there’s been a lot of discussion on accountability for moderation systems, but we’re going to hear a conversation about the need for auditing and accountability more broadly. And specifically auditing from independent third-party actors—not just regulators, consultants, and internal teams, but actually getting some level of external scrutiny to assess the systems and challenge the narratives being told by the companies building them.
In terms of the actual obstacles to seeing that happen, I think there are a lot of incongruities in how algorithmic auditors currently work.
It’s all these different actors that want to hold these systems accountable, but are currently working in isolation from each other and not very well coordinated. You have internal auditors within companies, consultancies, [and] startups that are coming up with tools. Journalists, law firms, civil society—there’s just so many different institutions and stakeholders that identify as algorithmic auditors that I think there will need to be a lot more cohesion.