Tagged: medium

  • VA2SFX 9:34 pm on June 21, 2017 Permalink | Reply
    Tags: export, import, medium, wordpress   

    Going to try to upload my Medium export into this WordPress.com site.

     
  • VA2SFX 2:52 pm on February 10, 2017 Permalink | Reply
    Tags: medium, Product, Profile, Search, Suggestions   

    Search Yourself 

    Possible Medium Search Improvements

    I like that I can search specifically within my publication on Medium:


    I wish I could search stories I’ve written on my profile page…


    WordPress is awesome for this. (But it has many drawbacks as well.)

    Quick access to my old articles is important for linking back to them and expanding on the world I build with my writing, so readers can go “down the rabbit hole.”

    Medium’s current configuration makes it hard for me to go back through my more than a thousand pieces of content:


    Without a user-level search, I have to scroll back through ALL my old stories to find a piece I want to link to. If it was written months or years ago, this becomes even more problematic.

    I need to be able to Search Myself.
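
    One stopgap I can run myself in the meantime: Medium’s data export unpacks into a folder of HTML files, one per story, which can be searched locally. A rough sketch, assuming the export keeps its usual posts/ layout of HTML files:

    ```typescript
    // Sketch: grep a Medium export for a keyword, as a stopgap for the
    // missing profile-level search. Assumes the export zip has been
    // unpacked and that stories live in a "posts/" folder of HTML files.
    import { readdirSync, readFileSync } from "fs";
    import { join } from "path";

    function searchMyself(exportDir: string, keyword: string): string[] {
      const postsDir = join(exportDir, "posts");
      const needle = keyword.toLowerCase();
      return readdirSync(postsDir)
        .filter((name) => name.endsWith(".html"))
        .filter((name) => {
          const html = readFileSync(join(postsDir, name), "utf8");
          // Strip tags crudely so markup doesn't produce false matches.
          const text = html.replace(/<[^>]+>/g, " ").toLowerCase();
          return text.includes(needle);
        });
    }

    console.log(searchMyself("./medium-export", "rabbit hole"));
    ```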

     
  • VA2SFX 5:06 pm on January 29, 2017 Permalink | Reply
    Tags: Augmented, Browser, medium   

    Reading the future 

    Augmented browsing and inter-textual analysis:

    1. Accurate content summary tools (reduce consumption volume without reducing depth or accuracy of information inputs)
    2. Keyword extraction, filtration, manipulation (alert to, modify and act on content; see the sketch after this list)
    3. Smart messaging and response transaction sublayer (universal event monitoring and triggering protocol)
    4. Analytic browsing layers with chained investigations (dive into semiotically-linked clusters from any node to enlarge or zoom in to context or subtext)
    5. Content divorced from presentational schema (can be consumed in virtually any chosen format)
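
    As a toy illustration of item 2, the most naive possible keyword extractor is just frequency counting against a stop-word list. Real augmented-browsing tooling would want something far smarter (TF-IDF, embeddings), but even this would be enough to drive simple alerts and filters:

    ```typescript
    // Minimal sketch of idea #2: naive keyword extraction by word
    // frequency. The stop-word list here is a tiny illustrative sample.
    const STOP_WORDS = new Set([
      "the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
      "that", "for", "on", "with", "as", "this", "are", "be", "you",
    ]);

    function extractKeywords(text: string, topN = 10): string[] {
      const counts = new Map<string, number>();
      for (const word of text.toLowerCase().match(/[a-z']+/g) ?? []) {
        if (word.length < 3 || STOP_WORDS.has(word)) continue;
        counts.set(word, (counts.get(word) ?? 0) + 1);
      }
      // Highest-frequency words first.
      return [...counts.entries()]
        .sort((a, b) => b[1] - a[1])
        .slice(0, topN)
        .map(([word]) => word);
    }
    ```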
     
  • VA2SFX 3:07 pm on January 8, 2017 Permalink | Reply
    Tags: medium   

    What I learned posting news summaries to Medium for 4 months 


    Over the past four months, Invironment has experimented heavily with re-posting environmental news summaries — sometimes with commentary, sometimes without — to Medium. I’ve seen a lot of positive results.

    Here are some examples from our news section:

    [Embedded collection from Medium.com]


    Tracking trends

    A major part of the utility of doing this has been to closely track trends in environmental news events as they unfold around the world. I’ve never been so aware, for example, of the struggle of communities around the world to retain control over clean drinking water. Or of the increasing number of major cities being routinely crippled by smog. Or of the business leaders who are trying to make a difference and/or make a buck off Climate Change.

    • Benefit: I now have a clear, referenceable index of environmental news stories from around the world at my fingertips.

    Common format

    After some experimentation, we settled on a fairly basic common format for news re-posts on Medium. It looks something like this:


    1. Start with a screenshot from Google Maps to provide geographic context on where the story takes place.
    2. Follow with a title, which begins by naming the news source and either repeats that source’s original article title or modifies it to fit our space needs and/or meta-narratives.
    3. Next comes an optional line or two of commentary (not pictured above).
    4. Then an embed of the original article source.
    5. Then the main (short) quotation from the source explaining the gist of the story.
    6. Follow with a section break, and additional commentary if needed.
    7. Follow that with any related news items we’ve indexed, or quotes and links to further information.
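
    If you were to model that format as data (purely hypothetical; the field names are mine, and we actually assemble these posts by hand in Medium’s editor), it might look like:

    ```typescript
    // Hypothetical model of the re-post format above; numbering matches
    // the steps in the list.
    interface NewsRepost {
      mapScreenshotUrl: string;  // 1. Google Maps shot for geographic context
      title: string;             // 2. "<Source>: <original or adapted headline>"
      commentary?: string;       // 3. optional line or two of commentary
      sourceEmbedUrl: string;    // 4. embed of the original article
      keyQuote: string;          // 5. short quotation giving the gist
      extraCommentary?: string;  // 6. after a section break, if needed
      relatedLinks: string[];    // 7. related items, quotes, further reading
    }
    ```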

    Some general guidelines


    Rapid turn-around

    Compared to writing exclusively original content all the time to keep our all-volunteer/zero-profit publication going, clearly identified news re-posts are quick to produce (minutes), can provide excellent context for any original published works, and, if done well, can actually serve the reader.

    There is a lot of value in quality curation in this day and age.


    Observed problems

    I’ve noticed that occasionally readers don’t grasp that this formatting:

    quote format

    means that the text is excerpted from another source. Enthusiastic commenters will try to take me to task over something ultimately not written by me.

    I always encourage them to track down the original source, ask questions, do their homework and share their results. That said, I’ve also switched to actively using quotation marks in these excerpts, like so:

    “quote format”

    I think that makes it just that much clearer for the reader.


    Stats boost

    Sadly, my publication stats appear to be “taking the day off,” so I can’t share them here. I will, however, go through a quick list of recent items and their recommendation numbers to give a framework:

    I could go on and on with examples like that, but that represents a fairly accurate distribution of recommendations on pieces.


    People will 💚 good news

    My hypothesis for many of these stories is that people see a positive news headline in their feed, and they simply recommend it outright — either without reading it or with just a casual scan of the summary. [Write for scanners.]


    Increased followers

    Since starting this experiment in September of 2016, Invironment’s followers have gone up by nearly 2,000 (as of Jan. 2017). Obviously, I can’t attribute that entire bump to simply posting news stories, but there seems to be an undeniable link, if we consider that for every 💚 readers give our posts, our stories get more widely distributed in Medium’s network.


    Twitter re-posts

    I’ve basically given up on Twitter, so I never bother to re-post links to our stories there. But I’ve noticed that, thanks to the news pieces, ordinary readers do that job for me.

    Here’s an example section of my Twitter notifications page:


    Now, I haven’t gone through and vetted all those accounts that have re-posted a link card to my stories. But even if they are all bots simply re-posting anything that matches a keyword, the fact is they are driving traffic and attention to our publication. So in addition to more recommendations and an increase in followers in-network, we’re also seeing significant bumps in visibility outside-of-network.


    Takeaway

    Since little green hearts 💚, follows and traffic are the currency of Medium, and the vast majority of these pieces were produced in under five minutes, I would argue that this is a pretty decent return on our time investment.


    What’s your opinion?

    Is this simply more low-brow “growth-hacking” that cheapens the Medium experience? A legitimate way to track and share important news items with a targeted/self-selected readership — or something else?

    I encourage you, as ever, to do your own experiments, to be above board about it, and to share your results with the community. Thanks for listening!

     
  • VA2SFX 10:06 pm on January 6, 2017 Permalink | Reply
    Tags: medium   

    Write for scanners 

    Do you scan while reading the internet?

    Of course you do.

    [Embedded story from Medium.com]

    It’s impossible not to.

    There’s too much information being “fed” to you constantly all over the internet every day to ever possibly read all of it…

    And you wouldn’t want to read all of it anyway, because most of it is shit 💩 input.

    In fact, my brain is numb from reading pointless things on the internet every day (like, perhaps ironically, this very article to some) — and yes, I do already blame myself, thank you.


    So why fight it?

    This is kind of evolving as I experiment with it, but the premise I’ve developed is:

    Take advantage of visual scanning habits, 
    rather than try to fight them.
    Try to drive them...

    Driving Attention

    Some ideas to explore:

    1. Shorten line lengths:
      (narrower over-all column effect,
      2/3 → to enable faster “box”
      pattern scanning)
    2. Add inline code as a text formatting effect. 

      Add bold,
       italics
      links
      }
    3. Make things bold to trap eye movement (or use H2 + short phrasing). Things that are bold seem to draw and slow eye movement.
    4. Make liberal use of section breaks and headers.
    5. Practice keyword redundancy. Say the same thing in different ways in other places in the piece using different formatting or in correlation with other elements, embeds, images, etc.
    6. Don’t get bogged down in huge paragraphs.
    7. Keyword. Redundancy.
    8. Use links as false underline. Things that are underlined draw more attention. Sad, but observably true if you scan a lot.
       //might be risky... buyer beware.
    9. Experiment. Like use embeds in weird ways:

    https://gist.github.com/tbooch/7e599d3c49dda8a7d5ed0d954de5ebcf


    Testing for scan-iness…

    I made this video of a friend’s Medium post, trying to show a slowed-down version of how my eye as a scanner (represented by the mouse cursor in the circle) quickly looks through the story to figure out what I think it’s about without outright reading it. (It’s self-defense against authors who turn out to be time-burglars.)

    Here is the test story for reference:
    https://medium.com/invironment/
    debugging-wild-bcdb20e6ab98#.n52s7u9sk



    After:

    And below is a modified version of the above story after Jeremy applied some of these structural and formatting modifications to see what the “cyber-outcome” would be for scanners like me…



    Getting the gist

    There’s definitely a slowing effect on the eye and attention, and I feel more driven to look at the good words and story locations, rapidly consuming the “general gist” of the piece and mapping out what I think its meaning/value really is, relative to me as the user.

    Speed reading times

    Obviously I haven’t done much of a scientific test here, but Medium lists Jeremy’s story as a 5 min read.

    • Before editorial changes to his piece, I scanned it while recording, and it took me 1:02 to “complete” my recording of the page (without actually reading word-by-word, and perhaps at a slower speed than I would under real “in the wild” scanning conditions).
    • After some editing, I scanned it again, and my recording time was 1:16, which means I spent a whole 14 more seconds consuming that content, and I think I ended up with much greater digestion, since keywords and concepts popped out at me more.

    What do you think? Good idea / Bad idea?

    Other possible applications

    I suspect there’s even a way you could train software with your visual scanning patterns, and use machine learning to then extrapolate highlighted parts of articles which it could automatically scan for you (anywhere on the net). So it would be sort of like custom blurbs for you. And you could drill down on the blurbs to open up the full story, commentary from other users, layers of linked information pulled in from other sources, etc.
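
    To make that speculation slightly more concrete: assuming the scanning patterns had already been distilled into per-word weights, the extrapolation step could be as simple as scoring each sentence against that profile (every name here is hypothetical):

    ```typescript
    // Pure speculation, sketched: score each sentence of an article
    // against a learned profile of words the eye lingers on, and surface
    // the top scorers as an auto-generated blurb.
    type ScanProfile = Map<string, number>; // word -> learned weight

    function autoBlurb(article: string, profile: ScanProfile, topN = 3): string[] {
      const sentences = article.split(/(?<=[.!?])\s+/);
      return sentences
        .map((s) => {
          const words = s.toLowerCase().match(/[a-z']+/g) ?? [];
          const score = words.reduce((sum, w) => sum + (profile.get(w) ?? 0), 0);
          // Normalize by length so long sentences don't always win.
          return { s, score: score / Math.max(words.length, 1) };
        })
        .sort((a, b) => b.score - a.score)
        .slice(0, topN)
        .map((x) => x.s);
    }
    ```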

     
  • VA2SFX 12:10 am on January 5, 2017 Permalink | Reply
    Tags: medium   

    How my 👀 work when I read Medium 

    Writing for drunks and scanners

    I’m a scanner*, I won’t lie…

    * I am not as skilled as SF Ali who appears to be able to read every word of every article on the platform and recommend it. 

    For my daily information ingestion routine, I find instead there’s simply no way to consume media out of a “feed” (however tailored) without scanning for the bits that interest you — foraging.

    There’s just too much out there.


    I think my eyes in general on Medium work like this, but like fast:

    [Image source]

    “Special sauce”

    It’s like diagonal streams of words open up and I look unconsciously for keywords that interest me, or structural elements that draw or hold eyeflow as I scroll down the page… *


    I know, these are very first-world problems of people who spend too much time on the web, but what can I say? var guilty = true;

    * If via scanning the whole thing meets some mysterious criteria I can’t quite articulate, I may even go in and start reading "old school"--if I have time. 
    // (Because I'm old-school like that) 
    BUT there is so much little clicky-flashy-blinking junky stuff out there on the internet competing for my limited cyber-attention. GAAARGH!

    I think this reading/scanning effect on Medium I’m describing is neither good nor bad in itself, but as a writer/content-producer/hustler there’s a way to manipulate it by, basically, blocking or chunking your content and making it more scanner-friendly…

    [Kanye West GIF via GIPHY]

    If your reader is constantly scrolling down while scanning content, you have to manipulate both the gaps *and* the focal points in articles.

    Like so:

    Headlines make you stop.

    Your eyes get drawn in. Especially if they are short and punchy and packed with keywords I know and trust, I might even read into that paragraph before….

    — Losing interest. Hm, got any pictures?

    Then I start scanning again — you can’t stop me! //Don't even try!! Here I go, watch me!


    Where was I —

    Uh, what was the subject of this post again?

    → → Oh wait, I know…

    [Scanner GIF via GIPHY]

    Enjoyed this post? Click 💚 or die!

     
  • VA2SFX 6:58 pm on September 21, 2016 Permalink | Reply
    Tags: Integration, medium   

    Slack Medium drafts integration 

    We need this

    Slack posts are an interesting and maybe under-utilized feature of Slack:

    https://get.slack.help/hc/en-us/articles/203950418-Compose-a-post

    They look pretty similar already to Medium drafts:


    But their options are more limited… And anyway, I’m already using Medium for drafts, so why not just enable me to choose Medium as an option here?

    (And while we’re at it, hook Medium notes into Slack also)

    Or else, enable me to push out Slack posts as Medium drafts, so I can use the network effects of Medium to distribute them more broadly.
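
    Until such an integration exists, that second direction is at least possible by hand, since Medium’s public API accepts draft posts. A minimal sketch (getting the text out of Slack is the missing piece; here it’s just a string):

    ```typescript
    // Sketch: push some exported text to Medium as a draft via the
    // Medium API (POST /v1/users/{userId}/posts).
    async function pushToMediumDraft(
      token: string,    // a Medium integration token
      userId: string,   // from GET https://api.medium.com/v1/me
      title: string,
      markdown: string, // e.g. a Slack post body, exported as Markdown
    ): Promise<void> {
      const res = await fetch(`https://api.medium.com/v1/users/${userId}/posts`, {
        method: "POST",
        headers: {
          Authorization: `Bearer ${token}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          title,
          contentFormat: "markdown",
          content: markdown,
          publishStatus: "draft", // land in drafts, not published
        }),
      });
      if (!res.ok) throw new Error(`Medium API error: ${res.status}`);
    }
    ```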

    Thanks for your consideration.

    cc: Slack, Slack Engineering

     
  • VA2SFX 1:52 pm on September 8, 2016 Permalink | Reply
    Tags: Bookmarks, medium, Reminder, Slash Commands   

    “Remind me later” bookmarks 

    Hey Medium, I’ve gotten so accustomed to the /remind command in Slack that I find myself wanting to use it on Medium stories. Hint hint.

    Suggestion: set a timer where a bookmarked item pops back into your feed at a defined interval with a reminder notice of some kind. 
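
    For concreteness, the suggestion amounts to something like this (entirely hypothetical; Medium has no such feature or API today):

    ```typescript
    // Hypothetical data model for "remind me later" bookmarks.
    interface TimedBookmark {
      storyUrl: string;
      remindAt: Date; // e.g. set via something like "/remind 3d"
    }

    // The check Medium could run when assembling your feed: anything
    // past its remindAt pops back in with a reminder notice.
    function dueReminders(bookmarks: TimedBookmark[], now = new Date()): TimedBookmark[] {
      return bookmarks.filter((b) => b.remindAt.getTime() <= now.getTime());
    }
    ```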
     
  • VA2SFX 7:38 pm on May 20, 2016 Permalink | Reply
    Tags: medium   

    Speech recognition demo in Medium iOS app 

    This is a feature that interests me a lot: getting good speech recognition as an option for writing stories in Medium. iOS already has a voice entry option built in and you can see it when you pull up the keyboard on the compose screen in the Medium app.

    https://vid.me/wc4U

    I did one already using Dragon Dictate for Mac here on desktop, which is moderately successful… but pretty buggy. (Just don’t step on the bugs and you’re fine — mostly)

    The process depicted above isn’t exactly perfect either, but it’s an interesting glimpse at what could be possible with a voice-to-text app dedicated to publishing on Medium. Conceivably, anyone could build such an app using the publishing features available via the Medium API.
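
    In a browser, for instance, such an app could lean on Chrome’s (non-standard, vendor-prefixed) Web Speech API and hand the transcript to the Medium API as a draft. A sketch only; support varies by browser:

    ```typescript
    // Sketch: stream dictation into a buffer using the Web Speech API.
    // The resulting text could then be posted to the Medium API as a draft.
    const SpeechRecognitionImpl =
      (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

    const recognizer = new SpeechRecognitionImpl();
    recognizer.continuous = true;      // keep listening across pauses
    recognizer.interimResults = false; // only final transcriptions

    let draftText = "";
    recognizer.onresult = (event: any) => {
      for (let i = event.resultIndex; i < event.results.length; i++) {
        if (event.results[i].isFinal) {
          draftText += event.results[i][0].transcript + " ";
        }
      }
    };

    recognizer.start(); // later: send draftText to the Medium API
    ```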

    In the video above, I launch the story editor in the iOS app (finger tap) and then tap the microphone key on the iPad, which activates the speech recognition built into iOS. You can see that I tried to give it a Dragon-style command, “capitalize Medium,” hoping it would go back, select the word, and capitalize it. But dictation doesn’t seem to work like that on iOS (or it does and I don’t know it yet) — it just marks down my words verbatim. Which, all in all, isn’t that bad.

    Anyway, when I try to click on the word “medium” to correct the case, the speech recognition cuts out, eliminating any possibility of quickly correcting a word using speech, and so on.

    Definitely an interesting demo.

    I wonder if it would be possible to write a story on Medium using an Amazon Echo and speech recognition?

    Or, using Google’s new device — I’ve already forgotten its name — publish a Medium story directly from there, composed verbally. You would probably need a text-to-speech playback feature so you could listen, pause it, and correct phrases…

    All of these kinds of weird possibilities are opening up now though. Google did also recently release its Speech API. I’m not crazy about cloud-based speech recognition, personally. But it’s time for Nuance to get with the program and put out a better product before it’s too late.

    And there’s also this:

    http://www.theverge.com/2016/5/12/11662616/google-gboard-keyboard-iphone-ios

    So basically Google now has an input layer directly into the iOS keyboard. I can’t confirm whether it already exists or not, but presumably Google is also trojan-horsing its Speech API in as an option here… (which you can test out as an input in Google Docs in Chrome if you’re on desktop)

    I’m guessing at least one of those microphone icons must be Google’s and one must be iOS’s?

    Check my arrows, yo — then search “the meatball shop”

    Like I said, interesting…

     
  • VA2SFX 4:27 pm on May 4, 2016 Permalink | Reply
    Tags: Accessibility, medium   

    Accessibility is for everyone 


    With all my recent experiments in computer voice control, I’ve been landing here a lot in my Mac control panel:

    A place I’d never visited much before…

    Programs like Dragon and Keyboard Maestro require accessibility privileges under Security & Privacy on Macs.

    Apple’s Support Center writes on the subject:

    When a third-party app tries to access and control your Mac through accessibility features, an alert informs you, and you must specifically grant the app access to your Mac in Security & Privacy preferences. […]

    Be cautious and grant access only to apps that you know and trust. If you give apps access to your Mac, you also give them access to your contact, calendar, and other information, and are subject to their terms and privacy policies, and not the Apple Privacy Policy. Be sure to review an app’s terms and privacy policy to understand how it treats and uses your information.

    I had never considered this part about the terms and privacy policies of apps you give this kind of access to — a whole other subject in itself for another time.

    Basically you have to give these programs permission to access functionality on your computer outside of the standardized input devices. They can trigger key commands, open programs, execute interface controls (button presses in programs) and so on.

    When you start experimenting with it, it’s actually really trippy. When you start triggering key combinations, keystrokes, button presses, mouse moves, etc., you start to unhook the functionalities that computers and programs offer not just from their own UI but from the physical UI of your computer’s input devices.

    In some very real sense this is also the promise that APIs offer: when you use, for example, something like IFTTT to set up recipes using the Medium API, you have also unhooked the core functionality that constitutes Medium the product/application from the UI experience offered by the site itself and remixed it using other tools.

    Likewise, when you use Keyboard Maestro (perhaps triggered by a Dragon command) to write a macro to click on a button or series of buttons in Medium’s interface to execute a specific action, you’re doing something similar.

    Or, if you turn on a feature called VoiceOver, also under Accessibility in the Mac control panel, you can take steps to do the same thing: supplement or augment the functionality offered by the UI of programs and webpages. On a desktop, press Cmd+F5 to turn VoiceOver on (and press it again to turn it off). I actually found it easier to understand the blind/low-vision experience using an iPad with VoiceOver enabled. If you have one and an interest in the subject, it’s worth trying; but here’s a video demo for quick reference (though you won’t get the full experience of what it’s like to try to use an iPad with your eyes closed):

    It’s pretty cool, and it works maybe better than you expect. There are other interesting, simple features under Accessibility, like the three-finger zoom tap and much more, which really augment the experience.

    This video also shows some interesting things about the effects on people’s lives of accessibility features and functionality:

    The possibilities opened up by these technologies are incredible and are only just now beginning to be discovered. With Siri (and other similar ones), interactive voice response, natural language processing, augmented and immersive spaces, we are entering a strange new world of design, interface and experience. May as well plan for it at this point…

    Now imagine that guy’s beatboxing is actually feeding back commands to the system using a customized control language in a responsive environment (with embedded listener devices and autonomous agents), and you’ll have a taste of where my head has been at.

    (be sure to watch that video a second time with CC enabled after)

    And just for fun:

    The experience of working like this (to rearrange and in some sense re-program UIs using accessibility functionality and assistive devices) gives an eerie computer-without-the-computer feeling — a feeling that I suspect will become more and more common as the future unfolds.

    Ultimately what I want to say is that accessibility is not about outlier use cases and shouldn’t be just a curiosity for non-disabled users and developers — it’s actually the essential core of human-computer interaction. And it’s due for disruption!

     